Feb 17 14:06:12 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 14:06:12 crc restorecon[4703]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 14:06:12 crc restorecon[4703]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc 
restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc 
restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 
14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc 
restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc 
restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 
crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 
14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc 
restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc 
restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 14:06:13 crc restorecon[4703]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc 
restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 
crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc 
restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 14:06:13 crc restorecon[4703]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc 
restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 14:06:13 crc restorecon[4703]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 14:06:14 crc kubenswrapper[4836]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 14:06:14 crc kubenswrapper[4836]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 14:06:14 crc kubenswrapper[4836]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 14:06:14 crc kubenswrapper[4836]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 14:06:14 crc kubenswrapper[4836]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 14:06:14 crc kubenswrapper[4836]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.187758 4836 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.191847 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.191968 4836 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192057 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192123 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192183 4836 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192252 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192369 4836 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192451 4836 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192515 4836 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 
14:06:14.192574 4836 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192643 4836 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192719 4836 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192786 4836 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192849 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192916 4836 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.192993 4836 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193069 4836 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193131 4836 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193196 4836 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193257 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193365 4836 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193430 4836 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193502 4836 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193581 4836 
feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193660 4836 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193755 4836 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193823 4836 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193904 4836 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.193984 4836 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194067 4836 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194142 4836 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194207 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194274 4836 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194384 4836 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194457 4836 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194525 4836 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194590 4836 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194673 4836 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194769 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194841 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194904 4836 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.194990 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195082 4836 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195145 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195204 4836 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195263 4836 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195402 4836 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195496 4836 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195574 4836 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195657 4836 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195730 4836 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195804 4836 feature_gate.go:330] 
unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195868 4836 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.195934 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196033 4836 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196107 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196170 4836 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196232 4836 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196339 4836 feature_gate.go:330] unrecognized feature gate: Example Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196418 4836 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196503 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196600 4836 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196713 4836 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196802 4836 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196867 4836 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.196935 4836 feature_gate.go:330] unrecognized feature gate: 
NetworkDiagnosticsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.197035 4836 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.197127 4836 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.197416 4836 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.197488 4836 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.197549 4836 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199326 4836 flags.go:64] FLAG: --address="0.0.0.0" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199360 4836 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199375 4836 flags.go:64] FLAG: --anonymous-auth="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199386 4836 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199396 4836 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199405 4836 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199416 4836 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199427 4836 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199435 4836 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199442 4836 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199451 4836 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199460 4836 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199468 4836 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199476 4836 flags.go:64] FLAG: --cgroup-root="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199483 4836 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199491 4836 flags.go:64] FLAG: --client-ca-file="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199498 4836 flags.go:64] FLAG: --cloud-config="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199506 4836 flags.go:64] FLAG: --cloud-provider="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199513 4836 flags.go:64] FLAG: --cluster-dns="[]" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199522 4836 flags.go:64] FLAG: --cluster-domain="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199529 4836 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199537 4836 flags.go:64] FLAG: --config-dir="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199544 4836 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199552 4836 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199562 4836 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199569 4836 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199577 4836 
flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199585 4836 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199592 4836 flags.go:64] FLAG: --contention-profiling="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199600 4836 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199607 4836 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199624 4836 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199664 4836 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199674 4836 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199681 4836 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199689 4836 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199699 4836 flags.go:64] FLAG: --enable-load-reader="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199707 4836 flags.go:64] FLAG: --enable-server="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199714 4836 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199727 4836 flags.go:64] FLAG: --event-burst="100" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199735 4836 flags.go:64] FLAG: --event-qps="50" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199742 4836 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199750 4836 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 
14:06:14.199757 4836 flags.go:64] FLAG: --eviction-hard="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199767 4836 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199774 4836 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199781 4836 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199789 4836 flags.go:64] FLAG: --eviction-soft="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199797 4836 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199804 4836 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199812 4836 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199820 4836 flags.go:64] FLAG: --experimental-mounter-path="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199828 4836 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199836 4836 flags.go:64] FLAG: --fail-swap-on="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199843 4836 flags.go:64] FLAG: --feature-gates="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199852 4836 flags.go:64] FLAG: --file-check-frequency="20s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199860 4836 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199868 4836 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199876 4836 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199883 4836 flags.go:64] FLAG: --healthz-port="10248" Feb 17 14:06:14 crc kubenswrapper[4836]: 
I0217 14:06:14.199891 4836 flags.go:64] FLAG: --help="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199899 4836 flags.go:64] FLAG: --hostname-override="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199906 4836 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199914 4836 flags.go:64] FLAG: --http-check-frequency="20s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199921 4836 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199928 4836 flags.go:64] FLAG: --image-credential-provider-config="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199936 4836 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199944 4836 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199952 4836 flags.go:64] FLAG: --image-service-endpoint="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199959 4836 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199966 4836 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199974 4836 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199984 4836 flags.go:64] FLAG: --kube-api-qps="50" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199991 4836 flags.go:64] FLAG: --kube-reserved="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.199999 4836 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200007 4836 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200014 4836 flags.go:64] FLAG: --kubelet-cgroups="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 
14:06:14.200022 4836 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200030 4836 flags.go:64] FLAG: --lock-file="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200037 4836 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200045 4836 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200053 4836 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200065 4836 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200074 4836 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200081 4836 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200089 4836 flags.go:64] FLAG: --logging-format="text" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200096 4836 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200104 4836 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200112 4836 flags.go:64] FLAG: --manifest-url="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200119 4836 flags.go:64] FLAG: --manifest-url-header="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200130 4836 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200138 4836 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200147 4836 flags.go:64] FLAG: --max-pods="110" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200154 4836 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 
14:06:14.200162 4836 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200169 4836 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200177 4836 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200185 4836 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200194 4836 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200202 4836 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200225 4836 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200233 4836 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200240 4836 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200250 4836 flags.go:64] FLAG: --pod-cidr="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200257 4836 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200287 4836 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200316 4836 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200326 4836 flags.go:64] FLAG: --pods-per-core="0" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200334 4836 flags.go:64] FLAG: --port="10250" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200342 4836 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200350 4836 flags.go:64] FLAG: --provider-id="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200357 4836 flags.go:64] FLAG: --qos-reserved="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200365 4836 flags.go:64] FLAG: --read-only-port="10255" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200373 4836 flags.go:64] FLAG: --register-node="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200380 4836 flags.go:64] FLAG: --register-schedulable="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200388 4836 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200403 4836 flags.go:64] FLAG: --registry-burst="10" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200410 4836 flags.go:64] FLAG: --registry-qps="5" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200417 4836 flags.go:64] FLAG: --reserved-cpus="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200424 4836 flags.go:64] FLAG: --reserved-memory="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200434 4836 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200442 4836 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200450 4836 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200457 4836 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200464 4836 flags.go:64] FLAG: --runonce="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200472 4836 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200480 4836 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200487 4836 flags.go:64] FLAG: --seccomp-default="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200494 4836 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200502 4836 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200510 4836 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200518 4836 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200526 4836 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200533 4836 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200541 4836 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200549 4836 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200557 4836 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200565 4836 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200572 4836 flags.go:64] FLAG: --system-cgroups="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200579 4836 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200592 4836 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200600 4836 flags.go:64] FLAG: --tls-cert-file="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200607 4836 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 
14:06:14.200616 4836 flags.go:64] FLAG: --tls-min-version="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200625 4836 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200632 4836 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200640 4836 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200647 4836 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200656 4836 flags.go:64] FLAG: --v="2" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200666 4836 flags.go:64] FLAG: --version="false" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200675 4836 flags.go:64] FLAG: --vmodule="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200685 4836 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.200693 4836 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202779 4836 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202795 4836 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202803 4836 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202810 4836 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202818 4836 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202825 4836 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202832 4836 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202839 4836 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202845 4836 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202852 4836 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202859 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202865 4836 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202873 4836 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202881 4836 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202888 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202896 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202903 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202910 4836 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202918 4836 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202927 4836 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202934 4836 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202941 4836 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202948 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202955 4836 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202961 4836 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202968 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202974 4836 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202983 4836 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.202992 4836 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203000 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203007 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203016 4836 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203025 4836 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203033 4836 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203039 4836 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203047 4836 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203055 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203064 4836 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203071 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203077 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203084 4836 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203090 4836 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203097 4836 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203103 4836 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203109 4836 feature_gate.go:330] unrecognized feature gate: Example Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203116 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203122 4836 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203129 4836 
feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203135 4836 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203141 4836 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203148 4836 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203155 4836 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203161 4836 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203167 4836 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203173 4836 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203179 4836 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203185 4836 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203192 4836 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203199 4836 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203205 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203211 4836 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203218 4836 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 
14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203224 4836 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203231 4836 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203238 4836 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203244 4836 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203251 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203257 4836 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203263 4836 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203269 4836 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.203275 4836 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.208433 4836 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.236811 4836 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.237671 4836 server.go:493] "Golang 
settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237934 4836 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237951 4836 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237960 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237967 4836 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237975 4836 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237982 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237988 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237993 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.237998 4836 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238003 4836 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238009 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238015 4836 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238020 4836 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 14:06:14 
crc kubenswrapper[4836]: W0217 14:06:14.238026 4836 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238032 4836 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238037 4836 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238042 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238047 4836 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238052 4836 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238057 4836 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238063 4836 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238068 4836 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238075 4836 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238081 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238088 4836 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238094 4836 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238100 4836 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238106 4836 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238111 4836 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238117 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238123 4836 feature_gate.go:330] unrecognized feature gate: Example Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238128 4836 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238134 4836 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238139 4836 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238145 4836 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238149 4836 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238154 4836 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238159 4836 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238164 4836 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238169 4836 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238173 4836 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238179 4836 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238183 4836 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238189 4836 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238194 4836 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238199 4836 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238204 4836 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238209 4836 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238214 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238218 4836 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238225 4836 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238232 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238237 4836 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238243 4836 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238249 4836 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238254 4836 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238259 4836 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238264 4836 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238270 4836 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238275 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238282 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238287 4836 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238314 4836 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238320 4836 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238325 4836 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238330 4836 feature_gate.go:330] 
unrecognized feature gate: VSphereStaticIPs Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238336 4836 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238341 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238346 4836 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238351 4836 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238355 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.238365 4836 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238523 4836 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238532 4836 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238584 4836 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238590 4836 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238595 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 
14:06:14.238601 4836 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238605 4836 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238610 4836 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238615 4836 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238622 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238627 4836 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238632 4836 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238637 4836 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238641 4836 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238647 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238652 4836 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238660 4836 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238668 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238673 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238679 4836 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238685 4836 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238692 4836 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238698 4836 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238702 4836 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238707 4836 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238713 4836 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238718 4836 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238722 4836 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238727 4836 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238732 4836 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238738 4836 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 14:06:14 crc 
kubenswrapper[4836]: W0217 14:06:14.238743 4836 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238747 4836 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238754 4836 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238760 4836 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238765 4836 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238772 4836 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238778 4836 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238782 4836 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238788 4836 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238794 4836 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238799 4836 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238804 4836 feature_gate.go:330] unrecognized feature gate: Example Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238810 4836 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238816 4836 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238822 4836 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238828 4836 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238834 4836 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238840 4836 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238847 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238853 4836 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238860 4836 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238868 4836 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238875 4836 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238882 4836 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238890 4836 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238897 4836 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238906 4836 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238913 4836 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238920 4836 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238926 4836 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238932 4836 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238938 4836 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238943 4836 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238948 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238953 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238959 4836 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238965 4836 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238970 4836 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 
14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238975 4836 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.238979 4836 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.238988 4836 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.240155 4836 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.249199 4836 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.249416 4836 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.251564 4836 server.go:997] "Starting client certificate rotation" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.251606 4836 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.251810 4836 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-12 02:47:06.199031913 +0000 UTC Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.251912 4836 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.286126 4836 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.289015 4836 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.290335 4836 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.307371 4836 log.go:25] "Validated CRI v1 runtime API" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.376949 4836 log.go:25] "Validated CRI v1 image API" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.379723 4836 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.387709 4836 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-14-00-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.387755 4836 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.423886 4836 manager.go:217] Machine: {Timestamp:2026-02-17 14:06:14.411030626 +0000 UTC m=+0.753958945 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f194f106-0bf2-4b65-bcb3-5215631b39d2 BootID:d638d470-b0e0-4be9-938f-7ec815bf6bd8 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:61:d6:c8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:61:d6:c8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:40:38:0b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:56:ea:a0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9c:ba:c2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:91:cc:ac Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:3c:b3:3b:43:d3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:39:ac:01:6a:5e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.424197 4836 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available.
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.424443 4836 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.426036 4836 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.426275 4836 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.426347 4836 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.426602 4836 topology_manager.go:138] "Creating topology manager with none policy"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.426618 4836 container_manager_linux.go:303] "Creating device plugin manager"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.427114 4836 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.427167 4836 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.429192 4836 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.429302 4836 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.436324 4836 kubelet.go:418] "Attempting to sync node with API server"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.436355 4836 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.436393 4836 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.436409 4836 kubelet.go:324] "Adding apiserver pod source"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.436434 4836 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.444771 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.444901 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.450702 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.450776 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.453760 4836 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.455970 4836 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.460475 4836 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.462895 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.462934 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.462949 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.462962 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.462988 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.463005 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.463018 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.463037 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.463051 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.463063 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.463084 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.463096 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.464206 4836 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.464891 4836 server.go:1280] "Started kubelet"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.465256 4836 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.466243 4836 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.466704 4836 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.466889 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.466930 4836 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.466987 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:44:57.915309409 +0000 UTC
Feb 17 14:06:14 crc systemd[1]: Started Kubernetes Kubelet.
Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.467503 4836 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.467602 4836 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.467611 4836 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.467744 4836 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.468330 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.468485 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.468535 4836 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.469605 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.472070 4836 factory.go:55] Registering systemd factory
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.472098 4836 factory.go:221] Registration of the systemd container factory successfully
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.472504 4836 factory.go:153] Registering CRI-O factory
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.472515 4836 factory.go:221] Registration of the crio container factory successfully
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.472728 4836 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.472773 4836 factory.go:103] Registering Raw factory
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.472794 4836 manager.go:1196] Started watching for new ooms in manager
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.473681 4836 manager.go:319] Starting recovery of all containers
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.497103 4836 server.go:460] "Adding debug handlers to kubelet server"
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498575 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498657 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498672 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498685 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498705 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498716 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498727 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498852 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498869 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498882 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498895 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498909 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498920 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498935 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498948 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498962 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498976 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.498988 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499000 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499012 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499024 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499036 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499049 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499061 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499075 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499088 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499103 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499116 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499162 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499177 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499191 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499205 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499219 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499231 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499244 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499256 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499267 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499278 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499290 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499320 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499332 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499344 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499356 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499371 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499383 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499395 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499406 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499419 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499430 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499442 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499453 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499464 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499480 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499493 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499505 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499518 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499531 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499542 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499554 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499565 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499576 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499587 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499600 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499617 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499627 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499638 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499650 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499662 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499673 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499685 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499697 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499707 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499717 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499725 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499734 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499742 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499751 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499760 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499769 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499779 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499787 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499796 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499805 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499813 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499822 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499838 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499850 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499859 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499869 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca"
seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499877 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499886 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499895 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499904 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499914 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499922 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" 
seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499936 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499945 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499953 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499963 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499972 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499982 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.499991 4836 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500000 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500010 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500024 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.498168 4836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18950dc6756be7a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 14:06:14.464849831 +0000 UTC m=+0.807778130,LastTimestamp:2026-02-17 14:06:14.464849831 +0000 UTC m=+0.807778130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500033 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500124 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500172 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500196 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500216 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500238 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 
14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500258 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500278 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500326 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500349 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500369 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500388 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500407 4836 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.500426 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501065 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501097 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501112 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501127 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501146 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501160 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501175 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501190 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501206 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501220 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501235 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501248 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501262 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501277 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501343 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501365 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501380 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501396 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501410 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501425 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501440 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501458 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501496 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501516 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501534 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501555 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501570 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501585 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501599 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501613 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501627 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501642 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501657 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501671 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501689 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501704 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501719 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501734 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501748 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501761 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501775 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501794 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501811 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501825 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501873 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501888 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501902 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.501915 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.507258 4836 manager.go:324] Recovery completed Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.518821 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.520420 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.520458 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.520467 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.521307 4836 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.521323 4836 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.521339 4836 state_mem.go:36] "Initialized new in-memory state store" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.528942 4836 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529021 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529046 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529061 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529080 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529093 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529106 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529119 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529135 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529148 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529161 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529174 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529189 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529203 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529215 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529228 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529242 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529257 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529271 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529283 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529327 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529340 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529353 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529366 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529378 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529390 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529403 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529416 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529446 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529459 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529472 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" 
seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529484 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529509 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529521 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529534 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529547 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529560 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 17 14:06:14 crc 
kubenswrapper[4836]: I0217 14:06:14.529571 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529582 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529594 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529608 4836 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529619 4836 reconstruct.go:97] "Volume reconstruction finished" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.529628 4836 reconciler.go:26] "Reconciler: start to sync state" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.564156 4836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.565790 4836 policy_none.go:49] "None policy: Start" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.566600 4836 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.566682 4836 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.566749 4836 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.566963 4836 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.566980 4836 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.566999 4836 state_mem.go:35] "Initializing new in-memory state store" Feb 17 14:06:14 crc kubenswrapper[4836]: W0217 14:06:14.567321 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.567399 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.567752 4836 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.642885 4836 manager.go:334] "Starting Device Plugin manager" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.643096 4836 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 14:06:14 crc 
kubenswrapper[4836]: I0217 14:06:14.643129 4836 server.go:79] "Starting device plugin registration server" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.643657 4836 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.643685 4836 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.643956 4836 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.644082 4836 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.644105 4836 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.654232 4836 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.667924 4836 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.668131 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.669691 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.669746 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.669760 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.669982 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.670571 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.670642 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.670676 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.675501 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.675562 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.675576 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.675794 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.675979 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.676035 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.676075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.676095 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.676155 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.676889 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.676909 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.676918 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.677111 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.677150 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.677164 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.677354 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.677538 
4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.677614 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.678069 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.678091 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.678100 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.678199 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.678358 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.678411 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679189 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679216 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679229 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679399 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679430 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679735 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679755 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679763 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679886 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679902 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.679910 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.680564 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.680612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.680633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.743842 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc 
kubenswrapper[4836]: I0217 14:06:14.744929 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.744962 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.744971 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.744995 4836 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.745510 4836 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834118 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834177 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834207 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834232 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834315 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834360 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834378 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834394 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834412 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834430 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834509 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834608 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834660 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 
14:06:14.834686 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.834716 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.935873 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936344 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936381 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936409 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936438 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936447 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936511 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936517 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936474 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 
14:06:14.936542 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936521 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936598 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936628 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936632 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936121 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936671 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936696 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936716 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936660 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936738 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936761 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936801 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936824 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936806 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936803 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936855 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936897 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936824 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.936954 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.937037 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.946511 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.948130 4836 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.948166 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.948177 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:14 crc kubenswrapper[4836]: I0217 14:06:14.948200 4836 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 14:06:14 crc kubenswrapper[4836]: E0217 14:06:14.948676 4836 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.000604 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.008965 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.025656 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.042767 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.047004 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:15 crc kubenswrapper[4836]: E0217 14:06:15.072240 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms" Feb 17 14:06:15 crc kubenswrapper[4836]: W0217 14:06:15.161204 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8542663a80ad2dce8cc30d6656370152510811bebc10a960a687ed9d28809db6 WatchSource:0}: Error finding container 8542663a80ad2dce8cc30d6656370152510811bebc10a960a687ed9d28809db6: Status 404 returned error can't find the container with id 8542663a80ad2dce8cc30d6656370152510811bebc10a960a687ed9d28809db6 Feb 17 14:06:15 crc kubenswrapper[4836]: W0217 14:06:15.161614 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-79b4471c38baad726843d2388b9c56eddcde98457fb9444a0a0ba4fd89f9eac4 WatchSource:0}: Error finding container 79b4471c38baad726843d2388b9c56eddcde98457fb9444a0a0ba4fd89f9eac4: Status 404 returned error can't find the container with id 79b4471c38baad726843d2388b9c56eddcde98457fb9444a0a0ba4fd89f9eac4 Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.349402 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.351228 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.351280 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 
14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.351308 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.351349 4836 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 14:06:15 crc kubenswrapper[4836]: E0217 14:06:15.351834 4836 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.466548 4836 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.467580 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:06:36.231395621 +0000 UTC Feb 17 14:06:15 crc kubenswrapper[4836]: W0217 14:06:15.492492 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:15 crc kubenswrapper[4836]: E0217 14:06:15.492572 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.571699 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31942fdcee1d24800a616c3b72535d0bfedb200e7db6cf8b9a5cb69248777533"} Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.573306 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79b4471c38baad726843d2388b9c56eddcde98457fb9444a0a0ba4fd89f9eac4"} Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.578146 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ac3aa5cc00f4173264fad5b484e64e792869df345c4a14cccf18b9917864c92e"} Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.580399 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8542663a80ad2dce8cc30d6656370152510811bebc10a960a687ed9d28809db6"} Feb 17 14:06:15 crc kubenswrapper[4836]: I0217 14:06:15.581727 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e06cb61d981c06c98abcbe7e0f8de1d57343d3a093805b6133bbfae4618ce6b8"} Feb 17 14:06:15 crc kubenswrapper[4836]: W0217 14:06:15.607269 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:15 crc kubenswrapper[4836]: E0217 14:06:15.607384 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed 
to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:15 crc kubenswrapper[4836]: W0217 14:06:15.732653 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:15 crc kubenswrapper[4836]: E0217 14:06:15.732764 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:15 crc kubenswrapper[4836]: W0217 14:06:15.820473 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:15 crc kubenswrapper[4836]: E0217 14:06:15.820573 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:15 crc kubenswrapper[4836]: E0217 14:06:15.873423 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" 
interval="1.6s" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.152700 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.154153 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.154195 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.154221 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.154251 4836 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 14:06:16 crc kubenswrapper[4836]: E0217 14:06:16.155082 4836 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.296585 4836 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 14:06:16 crc kubenswrapper[4836]: E0217 14:06:16.298160 4836 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.466555 4836 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: 
connect: connection refused Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.468740 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:48:01.364125556 +0000 UTC Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.587692 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905" exitCode=0 Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.587845 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905"} Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.587876 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.589061 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.589093 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.589112 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.589717 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc"} Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.591124 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.591360 4836 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d" exitCode=0 Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.591442 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d"} Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.591799 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.592077 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.592106 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.592114 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.592683 4836 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="735aeae35fb2663b5537053014ee78275b2abd919cbedb4730f40aca0a6921fd" exitCode=0 Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.592741 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"735aeae35fb2663b5537053014ee78275b2abd919cbedb4730f40aca0a6921fd"} Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.592821 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.593929 4836 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1" exitCode=0 Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.593952 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1"} Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.594057 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.594066 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.594095 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.594113 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.594826 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.594848 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.594861 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.596134 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 
14:06:16.596196 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:16 crc kubenswrapper[4836]: I0217 14:06:16.596217 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:17 crc kubenswrapper[4836]: W0217 14:06:17.436109 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:17 crc kubenswrapper[4836]: E0217 14:06:17.436187 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.466034 4836 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.469222 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:03:57.224778909 +0000 UTC Feb 17 14:06:17 crc kubenswrapper[4836]: E0217 14:06:17.557014 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="3.2s" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.611550 4836 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e" exitCode=0 Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.611616 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.611722 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.612422 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.612453 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.612463 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.614796 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e246e50c75b51b522a87eb1e3c23d1a8a008b63a663fc03fa1e5b7feef6451c7"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.614861 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.615660 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.615677 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.615685 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.617876 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.617947 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.619649 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.619666 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.621650 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.621671 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.621679 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36"} Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.621974 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.622596 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.622612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.622620 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.755735 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.756828 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.756868 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.756879 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:17 crc kubenswrapper[4836]: I0217 14:06:17.756902 4836 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 14:06:17 crc 
kubenswrapper[4836]: E0217 14:06:17.757251 4836 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 17 14:06:18 crc kubenswrapper[4836]: W0217 14:06:18.224111 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:18 crc kubenswrapper[4836]: E0217 14:06:18.224206 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:18 crc kubenswrapper[4836]: E0217 14:06:18.426730 4836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18950dc6756be7a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 14:06:14.464849831 +0000 UTC m=+0.807778130,LastTimestamp:2026-02-17 14:06:14.464849831 +0000 UTC m=+0.807778130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 14:06:18 crc kubenswrapper[4836]: W0217 14:06:18.426962 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:18 crc kubenswrapper[4836]: E0217 14:06:18.427022 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.466156 4836 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.469321 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:15:48.82668008 +0000 UTC Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.626698 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08"} Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.626785 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.627561 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.627591 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:18 crc kubenswrapper[4836]: 
I0217 14:06:18.627604 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.629798 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c1a52f71b0a88c92eb73fc0f119373f12587dd887e7dbb4a06dbb6e6d33d55c9"} Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.629929 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b"} Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.630020 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3"} Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.629873 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.631003 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.631111 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.631202 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.631508 4836 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2" 
exitCode=0 Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.631579 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2"} Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.631666 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.631691 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632145 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632425 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632449 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632460 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632651 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632769 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632851 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.632789 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.633037 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:18 crc kubenswrapper[4836]: I0217 14:06:18.633049 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:18 crc kubenswrapper[4836]: W0217 14:06:18.867126 4836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:18 crc kubenswrapper[4836]: E0217 14:06:18.867276 4836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:18.990648 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.335370 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.384017 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.467333 4836 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 17 14:06:19 crc 
kubenswrapper[4836]: I0217 14:06:19.469837 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:16:26.711818581 +0000 UTC Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.571842 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.635480 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.637439 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c1a52f71b0a88c92eb73fc0f119373f12587dd887e7dbb4a06dbb6e6d33d55c9" exitCode=255 Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.637492 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c1a52f71b0a88c92eb73fc0f119373f12587dd887e7dbb4a06dbb6e6d33d55c9"} Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.637556 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.638216 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.638251 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.638263 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.638805 4836 scope.go:117] 
"RemoveContainer" containerID="c1a52f71b0a88c92eb73fc0f119373f12587dd887e7dbb4a06dbb6e6d33d55c9" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.642025 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.642432 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16"} Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.642460 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07"} Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.642473 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995"} Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.642482 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8"} Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.642530 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.643070 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.643091 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:19 crc kubenswrapper[4836]: 
I0217 14:06:19.643098 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.643551 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.643567 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:19 crc kubenswrapper[4836]: I0217 14:06:19.643574 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.405108 4836 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.470086 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:15:12.274510797 +0000 UTC Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.648414 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b"} Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.648481 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.649552 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.649582 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.649594 4836 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.650146 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.652621 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177"} Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.652676 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.652821 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.653850 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.653916 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.653935 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.654097 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.654132 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.654143 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:20 crc 
kubenswrapper[4836]: I0217 14:06:20.862985 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.957786 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.958990 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.959029 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.959039 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:20 crc kubenswrapper[4836]: I0217 14:06:20.959063 4836 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.199081 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.470571 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:13:05.369418191 +0000 UTC Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.654378 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.654451 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.654624 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.655401 4836 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.655464 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.655481 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.655850 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.655921 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.655953 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.784193 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.784418 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.785684 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.785721 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.785733 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.990888 4836 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body= Feb 17 14:06:21 crc kubenswrapper[4836]: I0217 14:06:21.990990 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.470689 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:28:00.217074616 +0000 UTC Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.660759 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.660923 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.661832 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.661868 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.661878 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.662375 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:22 crc kubenswrapper[4836]: I0217 14:06:22.662403 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:22 crc 
kubenswrapper[4836]: I0217 14:06:22.662414 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:23 crc kubenswrapper[4836]: I0217 14:06:23.467230 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:23 crc kubenswrapper[4836]: I0217 14:06:23.467440 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:23 crc kubenswrapper[4836]: I0217 14:06:23.468686 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:23 crc kubenswrapper[4836]: I0217 14:06:23.468752 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:23 crc kubenswrapper[4836]: I0217 14:06:23.468771 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:23 crc kubenswrapper[4836]: I0217 14:06:23.471370 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:20:59.339888234 +0000 UTC Feb 17 14:06:24 crc kubenswrapper[4836]: I0217 14:06:24.472097 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:49:01.945698427 +0000 UTC Feb 17 14:06:24 crc kubenswrapper[4836]: E0217 14:06:24.654428 4836 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 14:06:24 crc kubenswrapper[4836]: I0217 14:06:24.830786 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 14:06:24 crc kubenswrapper[4836]: I0217 14:06:24.830963 4836 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 17 14:06:24 crc kubenswrapper[4836]: I0217 14:06:24.831984 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:24 crc kubenswrapper[4836]: I0217 14:06:24.832018 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:24 crc kubenswrapper[4836]: I0217 14:06:24.832027 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:25 crc kubenswrapper[4836]: I0217 14:06:25.472366 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:20:57.464223835 +0000 UTC Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.437857 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.438115 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.440363 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.440489 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.440582 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.446153 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.473442 4836 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:30:23.581972888 +0000 UTC Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.668895 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.670151 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.670227 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.670259 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:26 crc kubenswrapper[4836]: I0217 14:06:26.674167 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:27 crc kubenswrapper[4836]: I0217 14:06:27.473974 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:46:01.08478938 +0000 UTC Feb 17 14:06:27 crc kubenswrapper[4836]: I0217 14:06:27.670705 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:27 crc kubenswrapper[4836]: I0217 14:06:27.671785 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:27 crc kubenswrapper[4836]: I0217 14:06:27.671847 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:27 crc kubenswrapper[4836]: I0217 14:06:27.671864 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 
14:06:28 crc kubenswrapper[4836]: I0217 14:06:28.474323 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:13:39.456549446 +0000 UTC Feb 17 14:06:29 crc kubenswrapper[4836]: I0217 14:06:29.335908 4836 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 14:06:29 crc kubenswrapper[4836]: I0217 14:06:29.335962 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 14:06:29 crc kubenswrapper[4836]: I0217 14:06:29.474524 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:25:45.412269269 +0000 UTC Feb 17 14:06:29 crc kubenswrapper[4836]: I0217 14:06:29.572229 4836 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 14:06:29 crc kubenswrapper[4836]: I0217 14:06:29.572309 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 17 14:06:30 crc kubenswrapper[4836]: I0217 14:06:30.263284 4836 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 14:06:30 crc kubenswrapper[4836]: I0217 14:06:30.263353 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 14:06:30 crc kubenswrapper[4836]: I0217 14:06:30.480083 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:51:45.167330691 +0000 UTC Feb 17 14:06:31 crc kubenswrapper[4836]: I0217 14:06:31.480592 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:36:08.755481393 +0000 UTC Feb 17 14:06:32 crc kubenswrapper[4836]: I0217 14:06:32.173923 4836 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body= Feb 17 14:06:32 crc kubenswrapper[4836]: I0217 14:06:32.174066 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": 
context deadline exceeded" Feb 17 14:06:32 crc kubenswrapper[4836]: I0217 14:06:32.481776 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:25:58.199976609 +0000 UTC Feb 17 14:06:33 crc kubenswrapper[4836]: I0217 14:06:33.482452 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:11:16.49054462 +0000 UTC Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.482573 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:30:00.234956783 +0000 UTC Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.581430 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.581771 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.584631 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.584697 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.584708 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.586756 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:34 crc kubenswrapper[4836]: E0217 14:06:34.654596 4836 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" 
not found" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.693315 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.694160 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.694191 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.694201 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.886618 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.887337 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.888560 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.888614 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.888627 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:34 crc kubenswrapper[4836]: I0217 14:06:34.899529 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.268005 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" 
interval="6.4s" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.270424 4836 trace.go:236] Trace[1456549407]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 14:06:21.994) (total time: 13276ms): Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1456549407]: ---"Objects listed" error: 13276ms (14:06:35.270) Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1456549407]: [13.27613291s] [13.27613291s] END Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.270478 4836 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.270795 4836 trace.go:236] Trace[1113258187]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 14:06:23.149) (total time: 12120ms): Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1113258187]: ---"Objects listed" error: 12120ms (14:06:35.270) Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1113258187]: [12.120864197s] [12.120864197s] END Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.270826 4836 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.272131 4836 trace.go:236] Trace[1105507227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 14:06:23.637) (total time: 11634ms): Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1105507227]: ---"Objects listed" error: 11634ms (14:06:35.272) Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1105507227]: [11.634940224s] [11.634940224s] END Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.272332 4836 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.272928 4836 trace.go:236] Trace[1684451451]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 
14:06:22.037) (total time: 13234ms): Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1684451451]: ---"Objects listed" error: 13234ms (14:06:35.272) Feb 17 14:06:35 crc kubenswrapper[4836]: Trace[1684451451]: [13.234925026s] [13.234925026s] END Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.272958 4836 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.278947 4836 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.279107 4836 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.289469 4836 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.320025 4836 csr.go:261] certificate signing request csr-fn48j is approved, waiting to be issued Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.327093 4836 csr.go:257] certificate signing request csr-fn48j is issued Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.471724 4836 apiserver.go:52] "Watching apiserver" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.475067 4836 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.475682 4836 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.476151 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.476571 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.477054 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.477054 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.477101 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.477309 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.477432 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.477501 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.477648 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.478208 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vt5sw"] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.478549 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jlz6g"] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.478680 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vt5sw" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.478817 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jlz6g" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.480481 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.480778 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.481285 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.482966 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:04:27.304922284 +0000 UTC Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.486924 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.487310 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.487551 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.487750 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.487772 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.494030 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" 
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.494076 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.494930 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.499190 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.499209 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.499459 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.506443 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.520662 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.567253 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.569190 4836 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.578988 4836 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.579619 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.579724 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.579803 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.579880 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.579945 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580012 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580082 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580149 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580234 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580330 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580411 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " 
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580479 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580549 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580604 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580622 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580711 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580740 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580764 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580784 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580808 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580829 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580852 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580877 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580899 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580922 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580944 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580965 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.580989 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581009 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581024 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581039 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581057 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581071 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581085 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581105 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581128 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581156 
4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581177 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581200 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581220 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581241 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581263 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581286 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581339 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581365 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581387 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581409 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581429 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581460 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581480 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581509 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581530 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581551 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581572 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581604 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581624 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581646 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581670 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581691 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581711 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581732 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581753 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581787 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581810 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581832 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.581852 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582010 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582036 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582067 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" 
(UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582088 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582112 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582134 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582152 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582171 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582187 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582204 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582220 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582237 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582264 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582310 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" 
(UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582332 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582350 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582367 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582383 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582398 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582415 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582432 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582447 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582463 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582478 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582492 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" 
(UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582506 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582524 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582539 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582558 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582574 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582589 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582606 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582622 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582659 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582678 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582713 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 
14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582728 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582744 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582760 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582776 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582792 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582808 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582824 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582844 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582861 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582878 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582895 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582910 
4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582925 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582945 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582962 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582978 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.582996 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583013 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583029 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583044 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583059 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583058 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583074 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583089 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583106 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583125 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583142 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583158 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583173 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583190 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583205 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583220 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583236 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 
14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583254 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583274 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583311 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583329 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583345 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584983 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583228 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583250 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.593704 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583311 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583324 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583414 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.583481 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:06:36.083446558 +0000 UTC m=+22.426374827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583513 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583599 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583723 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583737 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583772 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583821 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583920 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583947 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.583972 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584040 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584115 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584183 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584205 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584248 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584413 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584417 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584642 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584714 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584664 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584806 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584837 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584882 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.584929 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.585027 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.585080 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.585126 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.585193 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.586395 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.586899 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.587148 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.587147 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.587264 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.587348 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.587494 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.587789 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588107 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588160 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588394 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588416 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588457 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588632 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588743 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588866 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.588873 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589071 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589133 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589322 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589341 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589404 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589424 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589537 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589547 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589722 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589777 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.589977 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.590326 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.590360 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.590382 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.590414 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.590521 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.590766 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.590930 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.591033 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.591580 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.591658 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.591900 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592030 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592272 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592288 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592613 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592673 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592755 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592875 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592882 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.592405 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.593377 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594186 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.593496 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.593501 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594147 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594246 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594278 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594326 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594350 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594432 4836 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594453 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594552 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594555 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594573 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594594 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594614 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594632 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594617 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594648 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594641 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594632 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594708 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594756 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594896 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595233 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595328 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595551 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595595 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595762 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595881 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595921 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595967 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.595965 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.594807 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596340 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596379 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596779 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596829 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596866 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596922 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596960 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596995 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.597046 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.597083 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: 
\"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596223 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596310 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596645 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596699 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.596731 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.597088 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.597648 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.598128 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.598162 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.598153 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.598426 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.598788 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599240 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599264 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599329 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.597118 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599391 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599400 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599411 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599636 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599660 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599671 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599430 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599750 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599701 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599870 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599899 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599913 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599918 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599948 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.599976 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600002 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600021 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600041 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600059 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600100 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600144 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600148 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600162 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600182 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600202 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600218 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600234 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600250 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600272 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600310 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600329 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600344 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600362 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600377 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600393 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600409 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600412 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600427 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600502 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600544 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-serviceca\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600569 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600592 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600610 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-host\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600628 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vbzg\" (UniqueName: \"kubernetes.io/projected/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-kube-api-access-8vbzg\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600653 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600676 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600700 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600724 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtsz\" (UniqueName: \"kubernetes.io/projected/f6d1f430-35ed-4c4e-a797-d7a0a5a45266-kube-api-access-kqtsz\") pod \"node-resolver-vt5sw\" (UID: \"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\") " pod="openshift-dns/node-resolver-vt5sw"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600746 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600769 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600775 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600790 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600819 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600840 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600869 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600895 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600919 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600937 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f6d1f430-35ed-4c4e-a797-d7a0a5a45266-hosts-file\") pod \"node-resolver-vt5sw\" (UID: \"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\") " pod="openshift-dns/node-resolver-vt5sw"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.600938 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601049 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601065 4836 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601077 4836 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601091 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601103 4836 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601115 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601128 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601142 4836 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601153 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601162 4836 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601173 4836 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601183 4836 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601194 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601208 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601221 4836 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601234 4836 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601248 4836 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601257 4836 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601266 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601275 4836 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601284 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601316 4836 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601321 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601329 4836 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601341 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601368 4836 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601388 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601404 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601416 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601428 4836 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601438 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601449 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601463 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601477 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601490 4836 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601660 4836 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601677 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601677 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601838 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601884 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601897 4836 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601909 4836 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.601993 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602011 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc
kubenswrapper[4836]: I0217 14:06:35.602025 4836 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602038 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602051 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602063 4836 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602081 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602094 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602106 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602119 
4836 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602132 4836 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602145 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602159 4836 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602172 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602184 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602211 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602224 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602241 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602251 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602264 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602276 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602287 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602319 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602331 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" 
DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602344 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602356 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602484 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602501 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602513 4836 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602526 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602537 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602550 4836 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602562 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602575 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602588 4836 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602604 4836 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602566 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602617 4836 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602719 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602734 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602746 4836 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602756 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602766 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602776 4836 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602786 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602797 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602807 4836 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602819 4836 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602832 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602846 4836 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602863 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602876 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602889 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602906 4836 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602918 4836 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" 
DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602931 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602945 4836 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602957 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602981 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602994 4836 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603006 4836 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603018 4836 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603029 4836 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603041 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603053 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603065 4836 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603078 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603090 4836 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603104 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603116 4836 reconciler_common.go:293] "Volume 
detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603131 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603145 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603160 4836 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603173 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603185 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603198 4836 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603210 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603223 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603236 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603249 4836 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603261 4836 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603272 4836 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603287 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603320 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603333 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603348 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603360 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603371 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602008 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602042 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602359 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602585 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602736 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.602961 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603390 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603051 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603123 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603469 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603184 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603621 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.603925 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.604147 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.604567 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.604632 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.604639 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.604908 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.605163 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.605231 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.605539 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.605818 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.606270 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.606455 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.606579 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:36.106562249 +0000 UTC m=+22.449490518 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.606644 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.606683 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:36.106676812 +0000 UTC m=+22.449605081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.606686 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.607148 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.607223 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.607233 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.607641 4836 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.608664 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.608764 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.608989 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.609191 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.609181 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.609384 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.609741 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.610093 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.611389 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.611520 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.611989 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.612045 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.612368 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.615491 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.616250 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.617505 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.618176 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.619278 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.619405 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.619433 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.619790 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.619813 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.619848 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.619876 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.619939 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:36.119919263 +0000 UTC m=+22.462847532 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.620402 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.621680 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.621738 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.622404 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.622468 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.622492 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.622506 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:35 crc kubenswrapper[4836]: E0217 14:06:35.622561 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:36.122541443 +0000 UTC m=+22.465469882 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.622558 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.623132 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.623554 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.624490 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.624589 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.624836 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.626108 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.626881 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.627011 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.627056 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.627542 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.627796 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.629561 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.630968 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.631121 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.631626 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.635911 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.638329 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.639869 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.640511 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.649765 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.650122 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.652924 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.653824 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.659541 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.675548 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.692243 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703717 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqtsz\" (UniqueName: \"kubernetes.io/projected/f6d1f430-35ed-4c4e-a797-d7a0a5a45266-kube-api-access-kqtsz\") pod \"node-resolver-vt5sw\" (UID: \"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\") " pod="openshift-dns/node-resolver-vt5sw"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703792 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703820 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703843 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f6d1f430-35ed-4c4e-a797-d7a0a5a45266-hosts-file\") pod \"node-resolver-vt5sw\" (UID: \"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\") " pod="openshift-dns/node-resolver-vt5sw"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703868 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-serviceca\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703894 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-host\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703915 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vbzg\" (UniqueName: \"kubernetes.io/projected/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-kube-api-access-8vbzg\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703968 4836 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703983 4836 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.703995 4836 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704009 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704023 4836 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704036 4836 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704048 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704062 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704075 4836 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704085 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704096 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704111 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704124 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704120 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f6d1f430-35ed-4c4e-a797-d7a0a5a45266-hosts-file\") pod \"node-resolver-vt5sw\" (UID: \"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\") " pod="openshift-dns/node-resolver-vt5sw"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704136 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704148 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704331 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704369 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704401 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-host\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g"
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704419 4836 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704434 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704446 4836 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704459 4836 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704470 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704482 4836 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704495 4836 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704507 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704519 4836 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704541 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704565 4836 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704577 4836 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704589 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704600 4836 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704613 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704625 4836 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704637 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704647 4836 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704659 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704671 4836 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704681 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704692 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704703 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704713 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704725 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704737 4836 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704750 4836 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704762 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704774 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704786 4836 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704798 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704810 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704821 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704832 4836 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704844 4836 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704873 4836 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704885 4836 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:35 
crc kubenswrapper[4836]: I0217 14:06:35.704899 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704919 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704930 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704943 4836 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704954 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704964 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704975 4836 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704986 4836 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.704998 4836 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705008 4836 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705018 4836 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705028 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705038 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705050 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705061 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath 
\"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705072 4836 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.705107 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-serviceca\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.718698 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqtsz\" (UniqueName: \"kubernetes.io/projected/f6d1f430-35ed-4c4e-a797-d7a0a5a45266-kube-api-access-kqtsz\") pod \"node-resolver-vt5sw\" (UID: \"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\") " pod="openshift-dns/node-resolver-vt5sw" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.719376 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vbzg\" (UniqueName: \"kubernetes.io/projected/f87e91ef-e64c-45a5-9bd5-cc6537e51b1b-kube-api-access-8vbzg\") pod \"node-ca-jlz6g\" (UID: \"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\") " pod="openshift-image-registry/node-ca-jlz6g" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.721414 4836 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34420->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.721494 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34420->192.168.126.11:17697: read: connection reset by peer" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.720406 4836 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41092->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.722664 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41092->192.168.126.11:17697: read: connection reset by peer" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.723017 4836 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.723129 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.795557 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.806616 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 14:06:35 crc kubenswrapper[4836]: W0217 14:06:35.817411 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-fddcb96714784863e96abf5ed55db043ac0bfb9c2084ffe566e853311f983486 WatchSource:0}: Error finding container fddcb96714784863e96abf5ed55db043ac0bfb9c2084ffe566e853311f983486: Status 404 returned error can't find the container with id fddcb96714784863e96abf5ed55db043ac0bfb9c2084ffe566e853311f983486 Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.821175 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.831056 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vt5sw" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.836731 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jlz6g" Feb 17 14:06:35 crc kubenswrapper[4836]: W0217 14:06:35.857709 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6d1f430_35ed_4c4e_a797_d7a0a5a45266.slice/crio-2944fbb7f287893828c9c9b126d5f545dfd917e4c6229460a9aedb029850836e WatchSource:0}: Error finding container 2944fbb7f287893828c9c9b126d5f545dfd917e4c6229460a9aedb029850836e: Status 404 returned error can't find the container with id 2944fbb7f287893828c9c9b126d5f545dfd917e4c6229460a9aedb029850836e Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.863099 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.931394 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bkk9g"] Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.932350 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.936608 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.936638 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.936613 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.936842 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.937013 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.945021 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.958837 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.985169 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:35 crc kubenswrapper[4836]: I0217 14:06:35.995077 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.005799 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.006680 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/895a19c9-a3f0-4a15-aa19-19347121388c-proxy-tls\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.006733 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/895a19c9-a3f0-4a15-aa19-19347121388c-rootfs\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.006775 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/895a19c9-a3f0-4a15-aa19-19347121388c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.006792 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99tf9\" (UniqueName: \"kubernetes.io/projected/895a19c9-a3f0-4a15-aa19-19347121388c-kube-api-access-99tf9\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.017623 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.025988 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.042514 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.057285 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.068732 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.107550 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:06:37.107524241 +0000 UTC m=+23.450452510 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.107881 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.107986 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.108013 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.108040 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/895a19c9-a3f0-4a15-aa19-19347121388c-proxy-tls\") pod 
\"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.108064 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/895a19c9-a3f0-4a15-aa19-19347121388c-rootfs\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.108087 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/895a19c9-a3f0-4a15-aa19-19347121388c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.108117 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99tf9\" (UniqueName: \"kubernetes.io/projected/895a19c9-a3f0-4a15-aa19-19347121388c-kube-api-access-99tf9\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.108577 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.108630 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 14:06:37.10861273 +0000 UTC m=+23.451541049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.108694 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.108797 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:37.108716302 +0000 UTC m=+23.451644571 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.109615 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/895a19c9-a3f0-4a15-aa19-19347121388c-rootfs\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.110613 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/895a19c9-a3f0-4a15-aa19-19347121388c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.116358 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/895a19c9-a3f0-4a15-aa19-19347121388c-proxy-tls\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.128442 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99tf9\" (UniqueName: \"kubernetes.io/projected/895a19c9-a3f0-4a15-aa19-19347121388c-kube-api-access-99tf9\") pod \"machine-config-daemon-bkk9g\" (UID: \"895a19c9-a3f0-4a15-aa19-19347121388c\") " pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc 
kubenswrapper[4836]: I0217 14:06:36.208997 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.209033 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219346 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219397 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219412 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219414 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219454 4836 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219471 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219490 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:37.219469974 +0000 UTC m=+23.562398243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.219532 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:37.219511675 +0000 UTC m=+23.562439944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.257013 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.302171 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-c76cc"] Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.302551 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.308115 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.308401 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.308759 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.308889 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.309103 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.310012 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfznp"] Feb 17 
14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.312154 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-t7845"] Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.312368 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.313169 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.322024 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.322450 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.322581 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.322689 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.322816 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.323047 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.323197 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.323197 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.328150 4836 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 14:01:35 +0000 UTC, rotation deadline is 2026-11-09 17:39:03.674999823 +0000 UTC Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.328205 4836 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6363h32m27.34679789s for next certificate rotation Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.331320 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.341865 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.366127 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: W0217 14:06:36.367632 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod895a19c9_a3f0_4a15_aa19_19347121388c.slice/crio-089eaed7b83958ef4cd6e49ca80a36b3b45719b2e4981b5ea960d68f4da80549 WatchSource:0}: Error finding container 089eaed7b83958ef4cd6e49ca80a36b3b45719b2e4981b5ea960d68f4da80549: Status 404 returned error can't find the container with id 089eaed7b83958ef4cd6e49ca80a36b3b45719b2e4981b5ea960d68f4da80549 Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.384573 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.408366 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410568 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-system-cni-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " 
pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410604 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-cni-multus\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410632 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8vh\" (UniqueName: \"kubernetes.io/projected/592aa549-1b1b-441e-93e4-0821e05ff2b2-kube-api-access-jc8vh\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410654 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-etc-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410687 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-kubelet\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410710 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-slash\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410732 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410756 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3eeaa6bd-bab3-4310-9522-747924f2e825-cni-binary-copy\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410775 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdwb\" (UniqueName: \"kubernetes.io/projected/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-kube-api-access-7zdwb\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410793 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-log-socket\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410811 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-config\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410829 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-cnibin\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410851 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-os-release\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410877 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-etc-kubernetes\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410923 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grf7r\" (UniqueName: \"kubernetes.io/projected/3eeaa6bd-bab3-4310-9522-747924f2e825-kube-api-access-grf7r\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410944 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-cnibin\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410964 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-kubelet\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.410983 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-cni-bin\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411002 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-systemd-units\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411022 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411042 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-env-overrides\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411065 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-var-lib-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411086 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411106 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-node-log\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411145 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-multus-certs\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411167 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-script-lib\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411199 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-system-cni-dir\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411221 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/592aa549-1b1b-441e-93e4-0821e05ff2b2-cni-binary-copy\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411241 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-bin\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411262 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-netd\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411284 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-cni-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411331 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-ovn\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411353 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-socket-dir-parent\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411375 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-conf-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411406 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovn-node-metrics-cert\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411428 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3eeaa6bd-bab3-4310-9522-747924f2e825-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411446 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-hostroot\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411464 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-daemon-config\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411496 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-netns\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411516 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-netns\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411538 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411560 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-k8s-cni-cncf-io\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411584 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-systemd\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.411651 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-os-release\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.419023 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.442563 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.451903 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.462288 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.471313 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.481526 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.483523 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:26:56.999702702 +0000 UTC Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.512439 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-system-cni-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.512476 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-cni-multus\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.512739 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-system-cni-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.512827 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-cni-multus\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.512948 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-etc-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.512497 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-etc-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514006 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8vh\" (UniqueName: 
\"kubernetes.io/projected/592aa549-1b1b-441e-93e4-0821e05ff2b2-kube-api-access-jc8vh\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514039 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-kubelet\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514056 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-slash\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514072 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514090 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdwb\" (UniqueName: \"kubernetes.io/projected/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-kube-api-access-7zdwb\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514106 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/3eeaa6bd-bab3-4310-9522-747924f2e825-cni-binary-copy\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514120 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-log-socket\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514057 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514134 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-config\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514165 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-cnibin\") pod \"multus-additional-cni-plugins-t7845\" (UID: 
\"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514183 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-os-release\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514196 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-etc-kubernetes\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514227 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grf7r\" (UniqueName: \"kubernetes.io/projected/3eeaa6bd-bab3-4310-9522-747924f2e825-kube-api-access-grf7r\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514241 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-cnibin\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514255 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-cni-bin\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc 
kubenswrapper[4836]: I0217 14:06:36.514269 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-kubelet\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514284 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-systemd-units\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514322 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514345 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-env-overrides\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514363 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-var-lib-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514380 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514395 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-node-log\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514414 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-multus-certs\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514431 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-script-lib\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514447 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-system-cni-dir\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514463 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/592aa549-1b1b-441e-93e4-0821e05ff2b2-cni-binary-copy\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514479 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-ovn\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514494 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-bin\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514509 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-netd\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514524 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-cni-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514539 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-socket-dir-parent\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514554 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-conf-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514576 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovn-node-metrics-cert\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514590 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3eeaa6bd-bab3-4310-9522-747924f2e825-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514605 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-hostroot\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514619 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-daemon-config\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514633 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-netns\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514647 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-netns\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514663 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514678 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-k8s-cni-cncf-io\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514694 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-systemd\") pod 
\"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514710 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-os-release\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.514953 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-config\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515023 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-os-release\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515066 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-cnibin\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515100 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-os-release\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " 
pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515121 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-etc-kubernetes\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515170 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-kubelet\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515200 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-slash\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515223 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515456 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-cnibin\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515495 4836 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-cni-bin\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515528 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-var-lib-kubelet\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515558 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-systemd-units\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.515582 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516006 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-env-overrides\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516050 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-var-lib-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516071 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-openvswitch\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516092 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-node-log\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516114 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-multus-certs\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516567 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-script-lib\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516612 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-system-cni-dir\") pod 
\"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516760 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-netns\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.516838 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-hostroot\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517152 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3eeaa6bd-bab3-4310-9522-747924f2e825-cni-binary-copy\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517223 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-log-socket\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517193 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/592aa549-1b1b-441e-93e4-0821e05ff2b2-cni-binary-copy\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc 
kubenswrapper[4836]: I0217 14:06:36.517272 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-host-run-k8s-cni-cncf-io\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517277 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-systemd\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517315 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3eeaa6bd-bab3-4310-9522-747924f2e825-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517461 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-cni-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517489 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-ovn\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517504 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-socket-dir-parent\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517510 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-bin\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517529 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-netd\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517547 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-conf-dir\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517575 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-netns\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.517591 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/592aa549-1b1b-441e-93e4-0821e05ff2b2-multus-daemon-config\") pod 
\"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.518156 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3eeaa6bd-bab3-4310-9522-747924f2e825-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.520935 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovn-node-metrics-cert\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.545278 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.545892 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8vh\" (UniqueName: \"kubernetes.io/projected/592aa549-1b1b-441e-93e4-0821e05ff2b2-kube-api-access-jc8vh\") pod \"multus-c76cc\" (UID: \"592aa549-1b1b-441e-93e4-0821e05ff2b2\") " pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.552424 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grf7r\" (UniqueName: \"kubernetes.io/projected/3eeaa6bd-bab3-4310-9522-747924f2e825-kube-api-access-grf7r\") pod \"multus-additional-cni-plugins-t7845\" (UID: \"3eeaa6bd-bab3-4310-9522-747924f2e825\") " pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.560026 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.561339 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdwb\" (UniqueName: \"kubernetes.io/projected/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-kube-api-access-7zdwb\") pod \"ovnkube-node-gfznp\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.573881 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.574575 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.576174 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.576904 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.578325 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.578948 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 
14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.579708 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.581046 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.581951 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.584758 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.585812 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.586756 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.588079 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.588805 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 
14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.589495 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.589676 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff82
53de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.591413 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.592265 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.593284 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.593947 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: 
I0217 14:06:36.594608 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.597809 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.598706 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.599223 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.600470 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.602313 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.603731 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.604503 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: 
I0217 14:06:36.605496 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.606036 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.606859 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.607341 4836 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.607445 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.609739 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.610521 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.610968 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" 
path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.612648 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.613817 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.614436 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.615952 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.616679 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.617221 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.617889 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.618050 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-c76cc" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.619285 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.620335 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.620871 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.621921 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.622536 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.624169 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.624780 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.625733 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.626310 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.626924 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.629192 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.629982 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.627814 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.634945 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t7845" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.679675 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.688860 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.697917 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.703066 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.703120 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.703136 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fddcb96714784863e96abf5ed55db043ac0bfb9c2084ffe566e853311f983486"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.703793 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.705617 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.709360 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.710953 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.711028 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177" exitCode=255 Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.711145 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 
14:06:36.711207 4836 scope.go:117] "RemoveContainer" containerID="c1a52f71b0a88c92eb73fc0f119373f12587dd887e7dbb4a06dbb6e6d33d55c9" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.715110 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"47d2c13bf3e4d71fa10e400c6e38b31f9db6c3c21c8413e9a420649f4d4cfa4d"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.730950 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.731256 4836 scope.go:117] "RemoveContainer" containerID="a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177" Feb 17 14:06:36 crc kubenswrapper[4836]: E0217 14:06:36.731553 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.731688 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.740484 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.740520 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"44528f3c87bd92c020dcd61eef6bdf96440546a99cb2ad727c4d12c7b41ccd2c"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.741871 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerStarted","Data":"dc193c55466e5525297bac82ed721d0f68341a9e983b8e349a3c89ceb9e53ab7"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.743074 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.743099 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"089eaed7b83958ef4cd6e49ca80a36b3b45719b2e4981b5ea960d68f4da80549"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.745553 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jlz6g" 
event={"ID":"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b","Type":"ContainerStarted","Data":"e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.745595 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jlz6g" event={"ID":"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b","Type":"ContainerStarted","Data":"6385b33f5ca92c383a9528f58e9bd8e4f9699b58d5246ef9d523a8df5756f25e"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.750808 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vt5sw" event={"ID":"f6d1f430-35ed-4c4e-a797-d7a0a5a45266","Type":"ContainerStarted","Data":"25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.750844 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vt5sw" event={"ID":"f6d1f430-35ed-4c4e-a797-d7a0a5a45266","Type":"ContainerStarted","Data":"2944fbb7f287893828c9c9b126d5f545dfd917e4c6229460a9aedb029850836e"} Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.761194 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.773064 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.790762 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: W0217 14:06:36.800028 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e8cda7_ec53_43bd_9fec_8ac4d6ecc26e.slice/crio-3bdc7f19fb50c4c29fda01e2e231206d0048a98eab720a4ee93274d360c514d1 WatchSource:0}: Error finding container 3bdc7f19fb50c4c29fda01e2e231206d0048a98eab720a4ee93274d360c514d1: Status 404 returned error can't find the container with id 3bdc7f19fb50c4c29fda01e2e231206d0048a98eab720a4ee93274d360c514d1 Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.806670 4836 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.818167 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.829395 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.842565 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s
2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.853878 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.871659 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.885413 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.902053 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:0
6:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-cop
y\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.915053 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1a52f71b0a88c92eb73fc0f119373f12587dd887e7dbb4a06dbb6e6d33d55c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:19Z\\\",\\\"message\\\":\\\"W0217 14:06:18.530858 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 14:06:18.531266 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771337178 cert, and key in /tmp/serving-cert-770380950/serving-signer.crt, 
/tmp/serving-cert-770380950/serving-signer.key\\\\nI0217 14:06:18.912509 1 observer_polling.go:159] Starting file observer\\\\nW0217 14:06:18.916420 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 14:06:18.918412 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:18.920068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-770380950/tls.crt::/tmp/serving-cert-770380950/tls.key\\\\\\\"\\\\nF0217 14:06:19.125256 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.930038 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.939645 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.948160 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.954402 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.972750 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:36 crc kubenswrapper[4836]: I0217 14:06:36.987403 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.119398 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.119562 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:06:39.11953413 +0000 UTC m=+25.462462399 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.119911 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.120026 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.120157 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.120209 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.120372 4836 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:39.120338101 +0000 UTC m=+25.463266560 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.120401 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:39.120390802 +0000 UTC m=+25.463319291 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.221166 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.221217 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221405 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221426 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221439 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221497 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:39.221481919 +0000 UTC m=+25.564410188 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221552 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221588 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221600 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.221653 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:39.221636953 +0000 UTC m=+25.564565222 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.483882 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:54:47.830613584 +0000 UTC Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.567173 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.567451 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.567188 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.567568 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.567777 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.567890 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.755104 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerStarted","Data":"d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc"} Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.757100 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.760036 4836 scope.go:117] "RemoveContainer" containerID="a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177" Feb 17 14:06:37 crc kubenswrapper[4836]: E0217 14:06:37.760373 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.760602 4836 generic.go:334] "Generic (PLEG): container finished" podID="3eeaa6bd-bab3-4310-9522-747924f2e825" containerID="b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55" exitCode=0 Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.760687 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerDied","Data":"b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55"} Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.760740 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerStarted","Data":"d19db0389ac58eebb9dc83dd13a3b70c233c32490fb0080720176b6469910e22"} Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.762402 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9" exitCode=0 Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.762489 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.762524 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" 
event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"3bdc7f19fb50c4c29fda01e2e231206d0048a98eab720a4ee93274d360c514d1"} Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.768016 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b"} Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.774627 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0
dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192
.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.788011 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42
745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.802701 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.818084 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.830478 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.844692 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.884580 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.925145 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1a52f71b0a88c92eb73fc0f119373f12587dd887e7dbb4a06dbb6e6d33d55c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:19Z\\\",\\\"message\\\":\\\"W0217 14:06:18.530858 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 14:06:18.531266 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771337178 cert, and key in /tmp/serving-cert-770380950/serving-signer.crt, /tmp/serving-cert-770380950/serving-signer.key\\\\nI0217 14:06:18.912509 1 observer_polling.go:159] Starting file observer\\\\nW0217 14:06:18.916420 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 14:06:18.918412 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:18.920068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-770380950/tls.crt::/tmp/serving-cert-770380950/tls.key\\\\\\\"\\\\nF0217 14:06:19.125256 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.950500 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.965032 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.977501 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:37 crc kubenswrapper[4836]: I0217 14:06:37.989052 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:37Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.011353 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.031974 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.044180 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.062958 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.081509 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.096329 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.113004 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.128953 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.143335 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.178130 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.191635 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.204534 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.216449 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.251374 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.266946 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.282386 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.491869 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-22 09:30:30.029718158 +0000 UTC Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.773816 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerStarted","Data":"14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4"} Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.777453 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.777520 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.777534 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.777544 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.793167 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.810157 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.821948 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.844332 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.857814 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.895850 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.910843 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.923410 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.938283 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.952601 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.977414 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.989185 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:38Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:38 crc kubenswrapper[4836]: I0217 14:06:38.995013 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.000051 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.005719 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.005879 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 
14:06:39.023246 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 
14:06:39.035975 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.060341 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.070285 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.090208 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.102899 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.118277 4836 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.128030 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.136233 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.142290 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.142410 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:06:43.142390678 +0000 UTC m=+29.485318947 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.142505 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.142563 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.142583 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.142651 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:43.142633005 +0000 UTC m=+29.485561344 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.142667 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.142698 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:43.142692046 +0000 UTC m=+29.485620315 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.146567 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.155113 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.172223 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.187442 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.204941 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.220566 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.237852 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.243101 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.243145 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243277 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243312 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243322 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243369 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:43.243356081 +0000 UTC m=+29.586284350 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243383 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243414 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243427 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.243481 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:43.243465684 +0000 UTC m=+29.586393953 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.492677 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:37:22.158181485 +0000 UTC Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.567552 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.567731 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.568142 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.568257 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.568364 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.568441 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.786979 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.787035 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.789089 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d"} Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.791421 4836 generic.go:334] "Generic (PLEG): container finished" podID="3eeaa6bd-bab3-4310-9522-747924f2e825" 
containerID="14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4" exitCode=0 Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.792184 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerDied","Data":"14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4"} Feb 17 14:06:39 crc kubenswrapper[4836]: E0217 14:06:39.798721 4836 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.810483 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.830416 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.845534 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.861278 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.877788 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.892055 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.906087 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.921430 4836 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.933857 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.946173 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.955914 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.977128 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:39 crc kubenswrapper[4836]: I0217 14:06:39.994003 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:39Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.016979 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.027664 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.040037 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.052509 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.071923 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.090076 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.103453 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.122632 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.137470 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.151031 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.162792 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.173973 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.189978 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.200534 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.221864 4836 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.261443 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.302571 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.493090 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:00:13.625338251 +0000 UTC Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.803674 4836 generic.go:334] "Generic (PLEG): container finished" podID="3eeaa6bd-bab3-4310-9522-747924f2e825" containerID="dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c" exitCode=0 Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.803783 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerDied","Data":"dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c"} Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.821472 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.834023 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.845770 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.856689 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.880994 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c0659834172
9c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.894600 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.907105 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.918492 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.932367 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.944479 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.967740 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.985855 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1
c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:40 crc kubenswrapper[4836]: I0217 14:06:40.998403 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:40Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.010775 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-opera
tor@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.022383 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.047386 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.048134 4836 scope.go:117] "RemoveContainer" containerID="a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177" Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.048363 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.494124 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:02:55.742413648 +0000 UTC Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.567530 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.567584 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.567662 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.567774 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.567905 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.568053 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.679661 4836 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.682537 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.682588 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.682605 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.682732 4836 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.694574 4836 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.694937 4836 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.696677 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.696725 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.696745 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.696769 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.696787 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.714419 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.718035 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.718068 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.718080 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.718095 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.718107 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.729387 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.732929 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.732958 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.732965 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.732979 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.732988 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.744433 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.748130 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.748160 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.748170 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.748186 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.748198 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.759280 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.762241 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.762277 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.762305 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.762323 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.762335 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.773959 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: E0217 14:06:41.774138 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.775730 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.775765 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.775779 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.775798 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.775812 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.809968 4836 generic.go:334] "Generic (PLEG): container finished" podID="3eeaa6bd-bab3-4310-9522-747924f2e825" containerID="b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7" exitCode=0 Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.810095 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerDied","Data":"b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7"} Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.815957 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.826057 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.839726 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.850757 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.871523 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.878979 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.879081 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.879143 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.879168 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.879227 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.885679 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.900093 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.912946 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.925484 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.935653 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b3
0211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.956759 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.968889 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.981491 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.982033 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.982064 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.982075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.982091 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.982103 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:41Z","lastTransitionTime":"2026-02-17T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:41 crc kubenswrapper[4836]: I0217 14:06:41.992609 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:41Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.003600 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.017369 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.085725 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.086033 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.086046 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.086063 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.086078 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.188088 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.188166 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.188176 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.188192 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.188202 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.290526 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.290598 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.290620 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.290650 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.290672 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.393694 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.393740 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.393752 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.393769 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.393781 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.494397 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:19:37.1478885 +0000 UTC Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.496085 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.496279 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.496612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.496740 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.496854 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.599676 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.599718 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.599727 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.599743 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.599752 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.702451 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.702490 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.702498 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.702511 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.702521 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.805606 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.805649 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.805659 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.805677 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.805690 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.820223 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerStarted","Data":"2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.833707 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.844816 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c18
0dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.855703 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.866157 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.877044 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.895008 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c0659834172
9c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.907667 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.908555 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.908587 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.908599 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.908615 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:42 crc 
kubenswrapper[4836]: I0217 14:06:42.908627 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:42Z","lastTransitionTime":"2026-02-17T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.918912 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.929719 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.941420 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.950428 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.967872 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.981037 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1
c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:42 crc kubenswrapper[4836]: I0217 14:06:42.993331 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:42Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.003819 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.010411 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.010444 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.010456 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.010471 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.010483 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.112988 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.113029 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.113038 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.113052 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.113061 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.183716 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.183857 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.183912 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:06:51.183876672 +0000 UTC m=+37.526804981 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.184012 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.184046 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.184075 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:51.184058017 +0000 UTC m=+37.526986316 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.184219 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.184327 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:51.184307624 +0000 UTC m=+37.527235883 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.215821 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.215872 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.215888 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.215914 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc 
kubenswrapper[4836]: I0217 14:06:43.215931 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.285128 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.285199 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285428 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285433 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285455 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285478 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285484 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285497 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285567 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:51.285545654 +0000 UTC m=+37.628473953 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.285593 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:06:51.285581985 +0000 UTC m=+37.628510284 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.319325 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.319387 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.319406 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.319430 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.319446 4836 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.426583 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.426846 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.426920 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.426999 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.427070 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.495324 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:32:36.286672043 +0000 UTC Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.533149 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.533211 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.533247 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.533264 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.533276 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.567981 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.568116 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.568528 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.568586 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.568622 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:43 crc kubenswrapper[4836]: E0217 14:06:43.568658 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.636141 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.636188 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.636205 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.636225 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.636238 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.739962 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.740028 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.740040 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.740061 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.740075 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.826354 4836 generic.go:334] "Generic (PLEG): container finished" podID="3eeaa6bd-bab3-4310-9522-747924f2e825" containerID="2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0" exitCode=0 Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.826404 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerDied","Data":"2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.832486 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.832809 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.843059 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.843095 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.843105 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.843120 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.843131 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.848157 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.864696 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.874133 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.878315 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.893110 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.905602 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.929128 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.943862 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1
c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.945600 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.945641 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.945649 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.945665 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.945674 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:43Z","lastTransitionTime":"2026-02-17T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.959847 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.973761 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:43 crc kubenswrapper[4836]: I0217 14:06:43.984872 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.002391 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.014502 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.024740 4836 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.035960 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.043718 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.048889 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.048925 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.048936 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.048951 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.048966 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.054022 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.070511 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.095597 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.111268 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.126625 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.142505 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.150947 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.150979 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.150991 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.151008 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.151020 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.164625 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.177410 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.192478 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.215875 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.237327 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.251927 4836 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.253974 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e
01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-58b4c7f79c-55gtf/status\": read tcp 38.102.83.233:43382->38.102.83.233:6443: use of closed network connection" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.275742 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.275778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.275786 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.275799 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.275807 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.294336 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.312201 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.323222 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.377774 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc 
kubenswrapper[4836]: I0217 14:06:44.377827 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.377847 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.377872 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.377890 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.480353 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.480388 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.480396 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.480410 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.480422 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.496107 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:46:24.682049718 +0000 UTC Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.583420 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.583469 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.583480 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.583499 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.583511 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.601559 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.644784 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.670960 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.685826 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.685874 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.685886 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.685904 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.685916 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.688390 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.699537 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.721031 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.735738 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.751324 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.763264 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.774598 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.789256 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.789325 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.789340 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.789360 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.789374 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.794714 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.810798 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.824675 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.840869 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.849249 4836 generic.go:334] "Generic (PLEG): container finished" podID="3eeaa6bd-bab3-4310-9522-747924f2e825" 
containerID="8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b" exitCode=0 Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.849319 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerDied","Data":"8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.849626 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.849644 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.857529 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b
ce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.876931 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"na
me\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8
54199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b900922
72e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.925100 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.925155 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.925170 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.925191 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 
14:06:44.925208 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:44Z","lastTransitionTime":"2026-02-17T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.929669 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.929807 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.951930 
4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.970455 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:44 crc kubenswrapper[4836]: I0217 14:06:44.982168 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.005236 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-m
etrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-
controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.021783 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.028982 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.029031 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.029043 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.029060 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.029072 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.035168 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.053370 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.068367 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.079214 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.096821 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.123053 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.131048 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.131078 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.131087 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.131105 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.131139 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.137022 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.150147 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.165958 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.180041 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.192152 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.212269 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c0659834172
9c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.233873 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.235143 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.235211 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.235223 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.235246 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.235262 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.249793 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.263380 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.280667 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.294098 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.319404 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.335790 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.337335 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.337381 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.337391 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.337405 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.337413 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.353094 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.373717 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.387742 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.403058 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.440715 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.440774 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.440787 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.440812 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.440829 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.496756 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:21:12.641158485 +0000 UTC Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.544429 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.544491 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.544504 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.544526 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.544539 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.567797 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.567870 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:45 crc kubenswrapper[4836]: E0217 14:06:45.567935 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.567951 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:45 crc kubenswrapper[4836]: E0217 14:06:45.568103 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:45 crc kubenswrapper[4836]: E0217 14:06:45.568180 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.647208 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.647256 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.647268 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.647284 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.647309 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.750386 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.750471 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.750482 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.750496 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.750506 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.852485 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.852527 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.852541 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.852560 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.852575 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.857963 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" event={"ID":"3eeaa6bd-bab3-4310-9522-747924f2e825","Type":"ContainerStarted","Data":"54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.881202 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:0
6:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-cop
y\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.892280 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.903791 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.916178 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.928415 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.948843 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.954801 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.954856 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.954866 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.954882 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.954892 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:45Z","lastTransitionTime":"2026-02-17T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.966888 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.982557 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:45 crc kubenswrapper[4836]: I0217 14:06:45.994412 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:45Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.013517 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:46Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.026115 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:46Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.042531 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf79
8778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d
34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:46Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.054823 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:46Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.056758 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.056793 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.056805 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.056822 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.056835 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.068131 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:46Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.078432 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:46Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.159809 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 
14:06:46.159854 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.159864 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.159880 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.159893 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.262402 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.262476 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.262494 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.262521 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.262540 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.365424 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.365471 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.365484 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.365503 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.365515 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.468500 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.468561 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.468576 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.468600 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.468615 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.497878 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:50:39.598494518 +0000 UTC Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.571388 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.571455 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.571480 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.571500 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.571513 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.674201 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.674240 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.674249 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.674262 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.674274 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.777564 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.778081 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.778179 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.778288 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.778390 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.961966 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.962007 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.962015 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.962031 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:46 crc kubenswrapper[4836]: I0217 14:06:46.962040 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:46Z","lastTransitionTime":"2026-02-17T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.066415 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.066455 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.066468 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.066481 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.066490 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.168495 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.168560 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.168582 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.168607 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.168625 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.270963 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.271004 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.271013 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.271028 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.271036 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.373542 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.373589 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.373605 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.373627 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.373643 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.476801 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.477102 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.477208 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.477367 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.477444 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.499590 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:16:53.359093644 +0000 UTC Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.567114 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:47 crc kubenswrapper[4836]: E0217 14:06:47.567518 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.567176 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:47 crc kubenswrapper[4836]: E0217 14:06:47.568079 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.567132 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:47 crc kubenswrapper[4836]: E0217 14:06:47.568374 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.579091 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.579386 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.579497 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.579572 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.579684 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.682144 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.682439 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.682708 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.682793 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.682867 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.785751 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.785797 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.785808 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.785832 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.785844 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.889474 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.889549 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.889566 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.889601 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.889617 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.992273 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.992321 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.992332 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.992348 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:47 crc kubenswrapper[4836]: I0217 14:06:47.992359 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:47Z","lastTransitionTime":"2026-02-17T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.095575 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.095644 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.095658 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.095681 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.095699 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.199049 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.199099 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.199113 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.199131 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.199145 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.301856 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.301915 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.301929 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.301948 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.301961 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.405045 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.405467 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.405619 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.405766 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.405902 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.470777 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8"] Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.471330 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: W0217 14:06:48.473233 4836 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: secrets "ovn-control-plane-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 17 14:06:48 crc kubenswrapper[4836]: E0217 14:06:48.473273 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-control-plane-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:06:48 crc kubenswrapper[4836]: W0217 14:06:48.473356 4836 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 17 14:06:48 crc kubenswrapper[4836]: E0217 14:06:48.473371 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:06:48 crc 
kubenswrapper[4836]: I0217 14:06:48.480194 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.480271 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6dh2\" (UniqueName: \"kubernetes.io/projected/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-kube-api-access-j6dh2\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.480324 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.480458 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.484137 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a
42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.499936 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:56:51.906011823 +0000 UTC Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.500011 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.508314 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.508373 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.508389 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.508410 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.508425 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.514018 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.526539 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.551382 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.576715 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.581543 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.581624 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.581683 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6dh2\" (UniqueName: \"kubernetes.io/projected/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-kube-api-access-j6dh2\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.581720 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.582457 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: 
\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.582497 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.597244 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.607245 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6dh2\" (UniqueName: \"kubernetes.io/projected/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-kube-api-access-j6dh2\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.612545 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.612596 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.612612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.612636 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc 
kubenswrapper[4836]: I0217 14:06:48.612651 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.614927 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.632178 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.647435 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.677232 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.695129 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.715267 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.715329 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.715339 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.715359 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.715370 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.715830 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.729116 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.744769 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.759555 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.817420 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc 
kubenswrapper[4836]: I0217 14:06:48.817458 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.817467 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.817481 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.817490 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.869273 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/0.log" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.871586 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5" exitCode=1 Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.871622 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.872508 4836 scope.go:117] "RemoveContainer" containerID="efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5" Feb 17 14:06:48 crc 
kubenswrapper[4836]: I0217 14:06:48.888505 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.900564 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.911305 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.924528 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.924580 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.924590 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.924608 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.924620 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:48Z","lastTransitionTime":"2026-02-17T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.934025 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.947102 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.959866 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.973156 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.986277 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:48 crc kubenswrapper[4836]: I0217 14:06:48.997759 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b3
0211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:48Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.022667 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"flector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167820 6099 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167887 6099 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.168346 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 14:06:48.168371 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 14:06:48.168384 6099 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 14:06:48.168403 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 14:06:48.168427 6099 factory.go:656] Stopping watch factory\\\\nI0217 14:06:48.168443 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 14:06:48.168450 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 14:06:48.168457 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 14:06:48.168463 6099 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.027026 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.027072 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.027085 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.027102 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.027116 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.039321 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.055981 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.070217 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.084615 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.099862 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.111449 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.129352 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.129390 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.129399 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.129412 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.129420 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.231531 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.231810 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.231901 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.231994 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.232112 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.289698 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.301162 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7nmc8\" (UID: \"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.335124 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.335185 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.335203 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.335226 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.335245 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.438535 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.438615 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.438627 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.438653 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.438669 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.500380 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:12:28.745568249 +0000 UTC Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.541160 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.541212 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.541224 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.541247 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.541259 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.568007 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.568035 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.568086 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:49 crc kubenswrapper[4836]: E0217 14:06:49.568177 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:49 crc kubenswrapper[4836]: E0217 14:06:49.568342 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:49 crc kubenswrapper[4836]: E0217 14:06:49.568518 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.643654 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.643689 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.643698 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.643714 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.643724 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.746262 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.746331 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.746343 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.746357 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.746367 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.849140 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.849196 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.849207 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.849225 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.849238 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.877840 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/1.log" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.878535 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/0.log" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.881557 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f" exitCode=1 Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.881624 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.881682 4836 scope.go:117] "RemoveContainer" containerID="efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.882827 4836 scope.go:117] "RemoveContainer" containerID="b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f" Feb 17 14:06:49 crc kubenswrapper[4836]: E0217 14:06:49.883089 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.901783 4836 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.919210 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.934903 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.944787 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c4txt"] Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.945352 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:49 crc kubenswrapper[4836]: E0217 14:06:49.945425 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.951961 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.952339 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.952399 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.952411 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.952432 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.952444 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:49Z","lastTransitionTime":"2026-02-17T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.967939 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.972211 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.982052 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"conta
inerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.998532 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:49Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.998946 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78bt\" (UniqueName: \"kubernetes.io/projected/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-kube-api-access-g78bt\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:49 crc kubenswrapper[4836]: I0217 14:06:49.999230 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.013254 4836 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.026642 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.041903 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.056647 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.056778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.056836 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc 
kubenswrapper[4836]: I0217 14:06:50.056849 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.056871 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.056886 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.067542 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.087414 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"flector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167820 6099 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167887 6099 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.168346 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 14:06:48.168371 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 14:06:48.168384 6099 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 14:06:48.168403 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 14:06:48.168427 6099 factory.go:656] Stopping watch factory\\\\nI0217 14:06:48.168443 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 14:06:48.168450 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 14:06:48.168457 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 14:06:48.168463 6099 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 
14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.100765 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g78bt\" (UniqueName: \"kubernetes.io/projected/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-kube-api-access-g78bt\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.100895 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:50 crc kubenswrapper[4836]: E0217 14:06:50.101074 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:50 crc kubenswrapper[4836]: E0217 14:06:50.101230 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:06:50.601200393 +0000 UTC m=+36.944128662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.102980 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.118025 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.120125 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78bt\" (UniqueName: \"kubernetes.io/projected/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-kube-api-access-g78bt\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.135684 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.146351 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.158200 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 
14:06:50.159668 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.159697 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.159705 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.159719 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.159730 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.175896 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.189004 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.201415 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.213852 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.225964 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.238414 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.250064 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.263076 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.263132 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.263142 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.263155 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.263163 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.267801 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"flector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167820 6099 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167887 6099 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.168346 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 14:06:48.168371 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 14:06:48.168384 6099 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 14:06:48.168403 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 14:06:48.168427 6099 factory.go:656] Stopping watch factory\\\\nI0217 14:06:48.168443 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 14:06:48.168450 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 14:06:48.168457 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 14:06:48.168463 6099 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 
14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.279516 4836 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f
2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.292907 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.304135 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.316846 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.328027 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.337994 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc 
kubenswrapper[4836]: I0217 14:06:50.348816 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.362719 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.365396 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc 
kubenswrapper[4836]: I0217 14:06:50.365467 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.365490 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.365523 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.365547 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.468018 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.468084 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.468098 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.468110 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.468121 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.500656 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:26:52.995139649 +0000 UTC Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.574112 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.574176 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.574194 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.574213 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.574229 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.606225 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:50 crc kubenswrapper[4836]: E0217 14:06:50.606524 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:50 crc kubenswrapper[4836]: E0217 14:06:50.606727 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:06:51.606691804 +0000 UTC m=+37.949620223 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.677056 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.677097 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.677107 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.677123 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.677134 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.780358 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.780403 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.780420 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.780436 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.780448 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.882957 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.882994 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.883005 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.883021 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.883032 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.888663 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/1.log" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.894315 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" event={"ID":"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a","Type":"ContainerStarted","Data":"61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.894366 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" event={"ID":"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a","Type":"ContainerStarted","Data":"ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.894382 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" event={"ID":"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a","Type":"ContainerStarted","Data":"9008936532e387fa5da7596a6b296d2fa252df36d3f613f719407f505026a1e4"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.912287 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.927974 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.938894 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.949505 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.962359 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.977055 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.986257 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.986315 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.986327 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.986370 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.986385 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:50Z","lastTransitionTime":"2026-02-17T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:50 crc kubenswrapper[4836]: I0217 14:06:50.987880 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:50Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.006704 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.028476 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.049379 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.065986 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.085190 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.089410 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.089436 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc 
kubenswrapper[4836]: I0217 14:06:51.089443 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.089455 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.089464 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.098073 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.114724 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"flector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167820 6099 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167887 6099 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.168346 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 14:06:48.168371 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 14:06:48.168384 6099 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 14:06:48.168403 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 14:06:48.168427 6099 factory.go:656] Stopping watch factory\\\\nI0217 14:06:48.168443 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 14:06:48.168450 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 14:06:48.168457 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 14:06:48.168463 6099 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 
14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.126334 4836 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.138142 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc 
kubenswrapper[4836]: I0217 14:06:51.189611 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307
c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.193517 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.193589 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.193599 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.193614 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.193630 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.211737 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.211882 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.212026 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:07:07.211996518 +0000 UTC m=+53.554924797 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.212086 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.212124 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.212277 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.212319 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:07.212305146 +0000 UTC m=+53.555233415 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.212364 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:07.212346617 +0000 UTC m=+53.555274886 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.296932 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.296996 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.297012 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.297030 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.297041 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.313607 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.313661 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.313828 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.313867 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.313900 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.313945 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:07.313932427 +0000 UTC m=+53.656860696 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.313828 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.314032 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.314056 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.314138 4836 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:07.314116822 +0000 UTC m=+53.657045151 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.400108 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.400273 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.400408 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.400447 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.400470 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.500886 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:24:16.473979782 +0000 UTC Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.503666 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.503705 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.503714 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.503728 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.503739 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.567951 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.568062 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.567951 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.567990 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.568284 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.568364 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.568426 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.568481 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.605924 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.605974 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.605986 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.606004 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.606017 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.617698 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.617921 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.618052 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:06:53.618019186 +0000 UTC m=+39.960947515 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.750827 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.750874 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.750882 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.750896 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.750906 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.852271 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.852342 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.852353 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.852369 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.852395 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.872354 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.876371 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.876466 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.876489 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.876516 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.876541 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.888181 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.891871 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.891931 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.891947 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.891968 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.892179 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.904479 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.908561 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.908591 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.908599 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.908612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.908622 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.923459 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.923497 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.923505 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.923519 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.923552 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.937080 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:51Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:51 crc kubenswrapper[4836]: E0217 14:06:51.937250 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.938926 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.938965 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.938974 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.938988 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:51 crc kubenswrapper[4836]: I0217 14:06:51.938999 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:51Z","lastTransitionTime":"2026-02-17T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.042505 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.042584 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.042602 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.042625 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.042642 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.145015 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.145065 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.145076 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.145092 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.145111 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.247607 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.247646 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.247655 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.247667 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.247676 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.349914 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.349972 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.349981 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.349995 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.350005 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.453839 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.453920 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.453943 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.453973 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.453995 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.501963 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:29:06.084279604 +0000 UTC Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.556945 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.556998 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.557010 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.557024 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.557033 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.660244 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.660308 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.660318 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.660331 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.660348 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.763203 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.763248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.763260 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.763275 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.763285 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.867165 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.867248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.867276 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.867357 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.867382 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.970564 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.970623 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.970633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.970651 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:52 crc kubenswrapper[4836]: I0217 14:06:52.970661 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:52Z","lastTransitionTime":"2026-02-17T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.073092 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.073143 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.073154 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.073174 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.073188 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.175815 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.175869 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.175881 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.175900 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.175912 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.278626 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.278673 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.278684 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.278701 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.278715 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.381968 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.382005 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.382016 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.382035 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.382048 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.485086 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.485121 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.485130 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.485223 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.485234 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.502282 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:23:06.422771797 +0000 UTC Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.567499 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.567553 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.567507 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:53 crc kubenswrapper[4836]: E0217 14:06:53.567717 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.567823 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:53 crc kubenswrapper[4836]: E0217 14:06:53.568023 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:53 crc kubenswrapper[4836]: E0217 14:06:53.568168 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:53 crc kubenswrapper[4836]: E0217 14:06:53.568262 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.569284 4836 scope.go:117] "RemoveContainer" containerID="a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.590565 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.590633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.590649 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.590675 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.590700 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.671845 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:53 crc kubenswrapper[4836]: E0217 14:06:53.672103 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:53 crc kubenswrapper[4836]: E0217 14:06:53.672178 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:06:57.672158782 +0000 UTC m=+44.015087051 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.693002 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.693053 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.693066 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.693083 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.693095 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.795343 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.795390 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.795401 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.795419 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.795431 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.898533 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.898589 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.898606 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.898630 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.898646 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:53Z","lastTransitionTime":"2026-02-17T14:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.907054 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.910291 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c"} Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.910903 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.947179 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o:
//e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:53Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.966605 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:53Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:53 crc kubenswrapper[4836]: I0217 14:06:53.988524 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:53Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.001481 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.001525 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.001542 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.001564 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.001579 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.007528 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.020958 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.037736 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b3
0211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.063832 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"flector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167820 6099 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167887 6099 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.168346 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 14:06:48.168371 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 14:06:48.168384 6099 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 14:06:48.168403 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 14:06:48.168427 6099 factory.go:656] Stopping watch factory\\\\nI0217 14:06:48.168443 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 14:06:48.168450 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 14:06:48.168457 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 14:06:48.168463 6099 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 
14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.085612 4836 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.104222 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.104252 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.104260 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.104274 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.104283 4836 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.105369 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.124387 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.174639 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.194489 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.207516 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.207965 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.208003 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.208015 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.208034 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.208047 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.221862 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc 
kubenswrapper[4836]: I0217 14:06:54.238986 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.252174 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.264229 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.310717 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.310756 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.310765 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.310778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.310786 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.412743 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.412781 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.412793 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.412807 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.412817 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.503068 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:51:49.562127583 +0000 UTC Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.514924 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.514966 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.514976 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.514993 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.515006 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.583768 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.594332 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.604243 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.618174 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.618217 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.618228 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.618242 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.618252 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.624996 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efe40e27f5a0b564251e0f62c8b242039a04e92beaa2f621047a67d94afda3e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"message\\\":\\\"flector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167820 6099 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.167887 6099 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 14:06:48.168346 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 14:06:48.168371 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 14:06:48.168384 6099 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 14:06:48.168403 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 14:06:48.168427 6099 factory.go:656] Stopping watch factory\\\\nI0217 14:06:48.168443 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 14:06:48.168450 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 14:06:48.168457 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 14:06:48.168463 6099 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 
14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.642334 4836 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.655050 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.672642 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.687024 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.700805 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc 
kubenswrapper[4836]: I0217 14:06:54.714472 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.720514 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.720590 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.720604 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.720632 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.720647 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.732560 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:
06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.746947 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.759237 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k
qtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.779535 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.794003 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.808708 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.820944 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:06:54Z is after 2025-08-24T17:21:41Z" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.822808 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.822841 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.822849 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.822862 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.822873 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.928619 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.928678 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.928690 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.928711 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:54 crc kubenswrapper[4836]: I0217 14:06:54.928724 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:54Z","lastTransitionTime":"2026-02-17T14:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.031314 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.031367 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.031378 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.031397 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.031408 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:55Z","lastTransitionTime":"2026-02-17T14:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.134047 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.134086 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.134096 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.134110 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.134121 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:55Z","lastTransitionTime":"2026-02-17T14:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.237115 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.237150 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.237159 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.237176 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.237189 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:55Z","lastTransitionTime":"2026-02-17T14:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.340136 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.340196 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.340206 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.340220 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.340230 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:55Z","lastTransitionTime":"2026-02-17T14:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.443928 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.443996 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.444010 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.444029 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.444047 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:55Z","lastTransitionTime":"2026-02-17T14:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.504134 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:09:12.861823943 +0000 UTC Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.547652 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.547721 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.547748 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.547778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.547805 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:55Z","lastTransitionTime":"2026-02-17T14:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.567888 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.567929 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.568034 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.568119 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:55 crc kubenswrapper[4836]: E0217 14:06:55.568122 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:55 crc kubenswrapper[4836]: E0217 14:06:55.568291 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:55 crc kubenswrapper[4836]: E0217 14:06:55.568423 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:55 crc kubenswrapper[4836]: E0217 14:06:55.568456 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.650617 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.650643 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.650651 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.650664 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:55 crc kubenswrapper[4836]: I0217 14:06:55.650672 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:55Z","lastTransitionTime":"2026-02-17T14:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 14:06:56 crc kubenswrapper[4836]: I0217 14:06:56.476980 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 14:06:56 crc kubenswrapper[4836]: I0217 14:06:56.477024 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 14:06:56 crc kubenswrapper[4836]: I0217 14:06:56.477035 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 14:06:56 crc kubenswrapper[4836]: I0217 14:06:56.477048 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 14:06:56 crc kubenswrapper[4836]: I0217 14:06:56.477058 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:56Z","lastTransitionTime":"2026-02-17T14:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 14:06:56 crc kubenswrapper[4836]: I0217 14:06:56.504850 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:07:36.316671926 +0000 UTC
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.401360 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.401426 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.401439 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.401466 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.401478 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:57Z","lastTransitionTime":"2026-02-17T14:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.505611 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:39:32.111427327 +0000 UTC
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.567650 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.567685 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.567724 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 14:06:57 crc kubenswrapper[4836]: E0217 14:06:57.567826 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 14:06:57 crc kubenswrapper[4836]: E0217 14:06:57.567885 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.567661 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 14:06:57 crc kubenswrapper[4836]: E0217 14:06:57.568135 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c"
Feb 17 14:06:57 crc kubenswrapper[4836]: E0217 14:06:57.568371 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.607242 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.607308 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.607323 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.607339 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.607351 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:57Z","lastTransitionTime":"2026-02-17T14:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 14:06:57 crc kubenswrapper[4836]: I0217 14:06:57.716158 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt"
Feb 17 14:06:57 crc kubenswrapper[4836]: E0217 14:06:57.716316 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 14:06:57 crc kubenswrapper[4836]: E0217 14:06:57.716374 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:07:05.716358297 +0000 UTC m=+52.059286566 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.431114 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.431160 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.431169 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.431183 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.431193 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:58Z","lastTransitionTime":"2026-02-17T14:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.506117 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:07:14.109075669 +0000 UTC Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.534331 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.534395 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.534414 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.534441 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.534459 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:58Z","lastTransitionTime":"2026-02-17T14:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.638217 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.638269 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.638278 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.638307 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.638318 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:58Z","lastTransitionTime":"2026-02-17T14:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.741122 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.741158 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.741176 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.741193 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.741204 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:58Z","lastTransitionTime":"2026-02-17T14:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.843391 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.843440 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.843452 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.843468 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.843479 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:58Z","lastTransitionTime":"2026-02-17T14:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.946489 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.946558 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.946573 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.946596 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:58 crc kubenswrapper[4836]: I0217 14:06:58.946608 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:58Z","lastTransitionTime":"2026-02-17T14:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.049381 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.049450 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.049468 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.049490 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.049509 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.151614 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.151649 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.151657 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.151671 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.151681 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.254626 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.254672 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.254682 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.254696 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.254705 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.357514 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.357568 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.357647 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.357664 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.357676 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.460823 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.460863 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.460873 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.460890 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.460901 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.506478 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:43:44.054808581 +0000 UTC Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.564588 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.564645 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.564657 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.564675 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.564686 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.568036 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.568088 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.568088 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.568031 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:06:59 crc kubenswrapper[4836]: E0217 14:06:59.568218 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:06:59 crc kubenswrapper[4836]: E0217 14:06:59.568449 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:06:59 crc kubenswrapper[4836]: E0217 14:06:59.568614 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:06:59 crc kubenswrapper[4836]: E0217 14:06:59.568703 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.667247 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.667330 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.667342 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.667358 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.667391 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.770516 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.770606 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.770633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.770666 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.770689 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.873096 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.873192 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.873212 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.873234 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.873248 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.976116 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.976152 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.976161 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.976174 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:06:59 crc kubenswrapper[4836]: I0217 14:06:59.976183 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:06:59Z","lastTransitionTime":"2026-02-17T14:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.078893 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.079004 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.079026 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.079049 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.079069 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.182376 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.182420 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.182431 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.182445 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.182457 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.284954 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.284997 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.285006 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.285081 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.285092 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.389322 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.389402 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.389423 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.389451 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.389467 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.492540 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.492597 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.492607 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.492623 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.492633 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.506884 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:23:37.451846543 +0000 UTC Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.594706 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.594756 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.594768 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.594787 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.594799 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.697559 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.697590 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.697599 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.697612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.697620 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.799450 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.799491 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.799501 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.799516 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.799526 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.902288 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.902370 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.902381 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.902425 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:00 crc kubenswrapper[4836]: I0217 14:07:00.902442 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:00Z","lastTransitionTime":"2026-02-17T14:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.004588 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.004663 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.004675 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.004694 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.004705 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.107013 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.107054 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.107066 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.107081 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.107092 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.209165 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.209212 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.209223 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.209238 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.209249 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.311199 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.311244 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.311255 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.311269 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.311279 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.414236 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.414313 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.414326 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.414344 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.414382 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.507156 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:16:53.037740429 +0000 UTC Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.517152 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.517213 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.517229 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.517251 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.517265 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.567100 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:01 crc kubenswrapper[4836]: E0217 14:07:01.567251 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.568142 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.568254 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:01 crc kubenswrapper[4836]: E0217 14:07:01.568365 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.568456 4836 scope.go:117] "RemoveContainer" containerID="b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f" Feb 17 14:07:01 crc kubenswrapper[4836]: E0217 14:07:01.568509 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.568759 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:01 crc kubenswrapper[4836]: E0217 14:07:01.569540 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.597267 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.610025 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.620217 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.628837 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.629009 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.629115 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.629259 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.633932 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.648672 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.659743 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.669764 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b3
0211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.690966 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.704510 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.716272 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.728005 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.732467 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.732502 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.732514 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.732530 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.732542 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.743079 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.763693 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.776553 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.790320 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc 
kubenswrapper[4836]: I0217 14:07:01.804553 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.819150 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.832159 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.834759 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.834807 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.834818 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.834835 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.834847 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.937148 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.937200 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.937213 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.937233 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.937247 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:01Z","lastTransitionTime":"2026-02-17T14:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.938148 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/1.log" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.943699 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1"} Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.948459 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.965131 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.979338 4836 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:01 crc kubenswrapper[4836]: I0217 14:07:01.993284 4836 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:01Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.007067 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.018613 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.037475 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c0659834172
9c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.040670 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.040750 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.040758 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.040775 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.040786 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.052513 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.066767 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.081041 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.094287 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.107813 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.118588 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.129353 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.129397 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.129408 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.129428 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.129441 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.139156 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: E0217 14:07:02.143599 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.149961 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.150023 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.150034 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.150112 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.150135 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.153540 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: E0217 14:07:02.168082 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.169870 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.171913 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.171941 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.171951 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.171965 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.171975 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.185721 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: E0217 14:07:02.196228 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.201038 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.201102 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.201117 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.201136 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.201148 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: E0217 14:07:02.219537 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.219918 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc 
kubenswrapper[4836]: I0217 14:07:02.226074 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.226102 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.226112 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.226126 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.226136 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: E0217 14:07:02.253664 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: E0217 14:07:02.253843 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.255658 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.255693 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.255707 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.255747 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.255758 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.358796 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.358836 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.358845 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.358859 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.358868 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.460986 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.461023 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.461033 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.461051 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.461062 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.507930 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:36:05.386499119 +0000 UTC Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.564348 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.564406 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.564419 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.564449 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.564461 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.667187 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.667238 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.667248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.667262 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.667270 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.769113 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.769160 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.769172 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.769187 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.769197 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.871978 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.872034 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.872043 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.872056 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.872065 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.950258 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/2.log" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.950932 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/1.log" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.954355 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1" exitCode=1 Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.954410 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.954497 4836 scope.go:117] "RemoveContainer" containerID="b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.955240 4836 scope.go:117] "RemoveContainer" containerID="4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1" Feb 17 14:07:02 crc kubenswrapper[4836]: E0217 14:07:02.955488 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.968436 4836 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.974120 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.974158 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.974167 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.974181 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.974191 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:02Z","lastTransitionTime":"2026-02-17T14:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.981494 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:02 crc kubenswrapper[4836]: I0217 14:07:02.992399 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:02Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.003169 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.020048 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c0659834172
9c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.034097 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.046656 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.060423 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.071917 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.075803 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.075839 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc 
kubenswrapper[4836]: I0217 14:07:03.075849 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.075864 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.075874 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.083610 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.102222 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b66e2b9fc807500c9a546e089579ae9af2485f2fa86b58e90b5979b4d18a052f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"rics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 14:06:49.751267 6265 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 14:06:49.751271 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vt5sw\\\\nI0217 14:06:49.751281 6265 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751311 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0217 14:06:49.751325 6265 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0217 14:06:49.751329 6265 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0217 14:06:49.751338 6265 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nF0217 14:06:49.751346 6265 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 
services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df67986
11b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.116052 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.125723 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc 
kubenswrapper[4836]: I0217 14:07:03.137871 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307
c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.149000 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.162208 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.173514 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.178610 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc 
kubenswrapper[4836]: I0217 14:07:03.178662 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.178679 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.178700 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.178712 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.281781 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.281811 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.281819 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.281832 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.281842 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.384780 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.384825 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.384836 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.384852 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.384864 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.487191 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.487239 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.487251 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.487267 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.487277 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.508951 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:23:17.013253046 +0000 UTC Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.567445 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.567492 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.567515 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:03 crc kubenswrapper[4836]: E0217 14:07:03.567595 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:03 crc kubenswrapper[4836]: E0217 14:07:03.567717 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:03 crc kubenswrapper[4836]: E0217 14:07:03.567831 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.568056 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:03 crc kubenswrapper[4836]: E0217 14:07:03.568205 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.589188 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.589233 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.589242 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.589255 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.589264 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.691665 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.691726 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.691740 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.691762 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.691776 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.794034 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.794107 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.794117 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.794131 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.794142 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.896616 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.896660 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.896671 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.896687 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.896697 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:03Z","lastTransitionTime":"2026-02-17T14:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.959053 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/2.log" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.963242 4836 scope.go:117] "RemoveContainer" containerID="4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1" Feb 17 14:07:03 crc kubenswrapper[4836]: E0217 14:07:03.963474 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:07:03 crc kubenswrapper[4836]: I0217 14:07:03.976230 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:03Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.000133 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.000187 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.000205 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc 
kubenswrapper[4836]: I0217 14:07:04.000227 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.000244 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.008474 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.027109 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.046362 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.064764 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.078643 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.099585 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.103851 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.103916 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.103933 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.103959 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.103975 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.115702 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.130765 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.143412 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.154568 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.174333 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.190792 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.202530 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.206838 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.206873 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.206883 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.206898 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.206914 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.214443 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc 
kubenswrapper[4836]: I0217 14:07:04.232649 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.245957 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.309925 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc 
kubenswrapper[4836]: I0217 14:07:04.309966 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.309977 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.309992 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.310003 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.412934 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.412981 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.412992 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.413011 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.413030 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.509396 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:26:19.990440991 +0000 UTC Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.516099 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.516173 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.516188 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.516210 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.516226 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.579022 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.591021 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.600353 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.618952 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.619009 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.619028 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.619050 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.619067 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.620185 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.635819 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.647942 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.663050 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.676375 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.691232 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.701679 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.720269 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.722260 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.722335 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.722347 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.722363 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.722373 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.733860 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.750466 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.763791 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b
9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.777573 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc 
kubenswrapper[4836]: I0217 14:07:04.791406 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.804492 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:04Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.825118 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc 
kubenswrapper[4836]: I0217 14:07:04.825170 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.825181 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.825202 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.825215 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.928646 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.928704 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.928719 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.928737 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:04 crc kubenswrapper[4836]: I0217 14:07:04.928753 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:04Z","lastTransitionTime":"2026-02-17T14:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.030995 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.031062 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.031087 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.031111 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.031126 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.133386 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.133426 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.133441 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.133458 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.133469 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.235694 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.235743 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.235754 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.235772 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.235788 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.338120 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.338166 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.338178 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.338196 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.338210 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.440824 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.440863 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.440875 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.440890 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.440901 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.509880 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:29:28.785126987 +0000 UTC Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.542799 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.542876 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.542893 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.542918 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.542932 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.567209 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.567241 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.567250 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.567270 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:05 crc kubenswrapper[4836]: E0217 14:07:05.567358 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:05 crc kubenswrapper[4836]: E0217 14:07:05.567545 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:05 crc kubenswrapper[4836]: E0217 14:07:05.567629 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:05 crc kubenswrapper[4836]: E0217 14:07:05.567701 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.645380 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.645424 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.645437 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.645454 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.645466 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.748331 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.748380 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.748392 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.748406 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.748415 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.800344 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:05 crc kubenswrapper[4836]: E0217 14:07:05.800531 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:07:05 crc kubenswrapper[4836]: E0217 14:07:05.800587 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:07:21.800572598 +0000 UTC m=+68.143500867 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.851352 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.851405 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.851418 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.851439 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.851448 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.953961 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.954002 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.954013 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.954051 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:05 crc kubenswrapper[4836]: I0217 14:07:05.954063 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:05Z","lastTransitionTime":"2026-02-17T14:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.057079 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.057356 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.057369 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.057389 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.057400 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.160470 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.160525 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.160533 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.160550 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.160560 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.263821 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.263882 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.263895 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.263915 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.263928 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.367001 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.367061 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.367075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.367106 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.367118 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.469667 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.469740 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.469759 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.469787 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.469805 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.510725 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:34:02.9226518 +0000 UTC Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.572111 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.572189 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.572199 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.572212 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.572223 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.674969 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.675015 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.675030 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.675048 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.675057 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.778000 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.778046 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.778058 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.778075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.778086 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.880679 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.881109 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.881245 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.881371 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.881468 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.983918 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.983969 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.983981 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.983999 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:06 crc kubenswrapper[4836]: I0217 14:07:06.984012 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:06Z","lastTransitionTime":"2026-02-17T14:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.086686 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.086743 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.086755 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.086769 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.086779 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.189258 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.189343 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.189357 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.189375 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.189387 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.214523 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.214632 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:07:39.214609419 +0000 UTC m=+85.557537698 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.214730 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.214801 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.214878 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.214908 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.214928 4836 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:39.214917567 +0000 UTC m=+85.557845836 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.214947 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:39.214937147 +0000 UTC m=+85.557865416 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.292440 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.292478 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.292490 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.292508 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.292523 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.316110 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.316154 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.316260 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.316263 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.316320 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.316336 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:07 crc 
kubenswrapper[4836]: E0217 14:07:07.316276 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.316389 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.316390 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:39.316374223 +0000 UTC m=+85.659302512 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.316424 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:07:39.316416284 +0000 UTC m=+85.659344553 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.395009 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.395053 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.395065 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.395081 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.395092 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.498866 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.498922 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.498934 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.498953 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.498965 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.511037 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:14:50.178798318 +0000 UTC Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.567926 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.568068 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.568062 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.568077 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.568209 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.568361 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.568453 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:07 crc kubenswrapper[4836]: E0217 14:07:07.568591 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.601594 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.601639 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.601652 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.601669 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.601680 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.704253 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.704360 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.704380 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.704397 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.704409 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.807239 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.807265 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.807273 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.807285 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.807309 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.909025 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.909068 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.909080 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.909096 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:07 crc kubenswrapper[4836]: I0217 14:07:07.909108 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:07Z","lastTransitionTime":"2026-02-17T14:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.011514 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.011824 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.011896 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.011968 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.012040 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.114583 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.114627 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.114638 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.114654 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.114666 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.216857 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.216912 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.216924 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.216941 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.216953 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.319963 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.320057 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.320076 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.320099 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.320116 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.422952 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.423014 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.423031 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.423056 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.423074 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.511783 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:44:40.412569106 +0000 UTC Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.526477 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.526545 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.526604 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.526629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.526646 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.629398 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.629469 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.629483 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.629504 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.629519 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.733076 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.733124 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.733135 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.733151 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.733164 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.837360 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.837609 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.837725 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.837836 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.837904 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.940490 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.940836 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.940924 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.941036 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:08 crc kubenswrapper[4836]: I0217 14:07:08.941128 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:08Z","lastTransitionTime":"2026-02-17T14:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.043879 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.043932 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.043947 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.043967 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.043982 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.146745 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.146787 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.146798 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.146813 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.146826 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.249926 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.250223 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.250360 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.250441 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.250524 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.340513 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.353183 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.353516 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.353629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.353750 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.353871 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.361610 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.374399 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.386898 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.388415 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.397270 4836 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.399610 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.410445 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.431645 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.446085 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.455960 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.455993 4836 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.456004 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.456019 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.456030 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.458397 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.469215 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.481433 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.494377 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.512051 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:10:55.402095534 +0000 UTC Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.516641 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 
model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.527077 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b
9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.536865 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc 
kubenswrapper[4836]: I0217 14:07:09.549453 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307
c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.558229 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.558273 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.558283 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.558315 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.558327 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.562380 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z 
is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.567434 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.567458 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.567466 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.567519 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:09 crc kubenswrapper[4836]: E0217 14:07:09.567548 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:09 crc kubenswrapper[4836]: E0217 14:07:09.567636 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:09 crc kubenswrapper[4836]: E0217 14:07:09.567739 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:09 crc kubenswrapper[4836]: E0217 14:07:09.567796 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.573598 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.584901 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.599077 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.610686 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.629067 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.642254 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.653899 4836 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.661172 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.661205 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.661213 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.661240 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.661249 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.667069 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.677219 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.689822 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.701053 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.720526 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.736059 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.749169 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.764570 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.764612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.764623 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.764639 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.764650 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.766470 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.779548 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.792955 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.803281 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.813026 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:09Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:09 crc 
kubenswrapper[4836]: I0217 14:07:09.867792 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.867834 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.867858 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.867872 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.867882 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.970175 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.970211 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.970219 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.970232 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:09 crc kubenswrapper[4836]: I0217 14:07:09.970240 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:09Z","lastTransitionTime":"2026-02-17T14:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.073198 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.073247 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.073258 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.073277 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.073305 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.175495 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.175577 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.175591 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.175630 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.175645 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.278567 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.278631 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.278644 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.278662 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.278677 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.382144 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.382189 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.382201 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.382408 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.382567 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.485687 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.485748 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.485758 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.485774 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.485789 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.513085 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:53:24.770827489 +0000 UTC Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.587871 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.587913 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.587923 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.587937 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.587946 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.690927 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.690989 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.691002 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.691022 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.691034 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.792983 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.793023 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.793033 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.793048 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.793060 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.895536 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.895580 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.895592 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.895609 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.895623 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.998893 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.998988 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.999024 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.999058 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:10 crc kubenswrapper[4836]: I0217 14:07:10.999082 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:10Z","lastTransitionTime":"2026-02-17T14:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.102125 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.102185 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.102197 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.102222 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.102235 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.204166 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.204225 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.204240 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.204263 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.204506 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.307234 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.307286 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.307319 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.307334 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.307344 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.409248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.409324 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.409340 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.409356 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.409366 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.512197 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.512260 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.512281 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.512332 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.512352 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.513445 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:13:07.27404931 +0000 UTC Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.567475 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.567523 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.567475 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.567615 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:11 crc kubenswrapper[4836]: E0217 14:07:11.567965 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:11 crc kubenswrapper[4836]: E0217 14:07:11.568289 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:11 crc kubenswrapper[4836]: E0217 14:07:11.568347 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:11 crc kubenswrapper[4836]: E0217 14:07:11.569552 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.615271 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.615375 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.615398 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.615420 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.615438 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.718865 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.718938 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.718961 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.718993 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.719018 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.822759 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.822804 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.822814 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.822830 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.822839 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.925042 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.925135 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.925148 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.925165 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:11 crc kubenswrapper[4836]: I0217 14:07:11.925178 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:11Z","lastTransitionTime":"2026-02-17T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.028468 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.028527 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.028542 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.028561 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.028574 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.130903 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.130964 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.130976 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.130995 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.131012 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.233525 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.233580 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.233591 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.233608 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.233619 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.270350 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.270396 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.270407 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.270420 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.270428 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: E0217 14:07:12.282385 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:12Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.287090 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.287134 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.287143 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.287158 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.287167 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: E0217 14:07:12.299901 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:12Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.303707 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.303744 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.303754 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.303777 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.303788 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: E0217 14:07:12.322068 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:12Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.326195 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.326248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.326258 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.326271 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.326280 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: E0217 14:07:12.342283 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:12Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.346763 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.346823 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.346842 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.346867 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.346886 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: E0217 14:07:12.362727 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:12Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:12 crc kubenswrapper[4836]: E0217 14:07:12.362847 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.364527 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.364550 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.364558 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.364571 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.364581 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.468263 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.468358 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.468381 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.468406 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.468424 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.513936 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:05:59.153114735 +0000 UTC Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.570768 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.570849 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.570865 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.570917 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.570937 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.674191 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.674259 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.674282 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.674364 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.674389 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.776996 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.777172 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.777207 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.777237 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.777259 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.890804 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.890848 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.890859 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.890879 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.890892 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.992653 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.992689 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.992698 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.992710 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:12 crc kubenswrapper[4836]: I0217 14:07:12.992718 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:12Z","lastTransitionTime":"2026-02-17T14:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.095182 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.095224 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.095236 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.095253 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.095265 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.197937 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.197992 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.198001 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.198016 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.198026 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.300521 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.300563 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.300579 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.300594 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.300606 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.402477 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.402531 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.402544 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.402561 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.402574 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.505520 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.505609 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.505633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.505664 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.505685 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.514741 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:29:05.7292283 +0000 UTC Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.567088 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.567139 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.567196 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.567141 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:13 crc kubenswrapper[4836]: E0217 14:07:13.567360 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:13 crc kubenswrapper[4836]: E0217 14:07:13.567786 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:13 crc kubenswrapper[4836]: E0217 14:07:13.567986 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:13 crc kubenswrapper[4836]: E0217 14:07:13.568101 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.608643 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.608692 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.608704 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.608723 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.608733 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.711550 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.711613 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.711629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.711654 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.711673 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.813965 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.814008 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.814017 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.814032 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.814041 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.916078 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.916128 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.916145 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.916163 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:13 crc kubenswrapper[4836]: I0217 14:07:13.916176 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:13Z","lastTransitionTime":"2026-02-17T14:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.018976 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.019030 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.019044 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.019062 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.019075 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.121750 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.121787 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.121795 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.121809 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.121818 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.224668 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.224722 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.224735 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.224751 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.224763 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.327254 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.327371 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.327391 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.327413 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.327429 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.430222 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.430277 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.430333 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.430358 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.430373 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.515508 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:51:37.674642736 +0000 UTC Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.532854 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.532958 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.533020 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.533043 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.533060 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.580866 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf1
0370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.593645 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.605393 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881e
a75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.619876 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.634964 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.636027 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.636062 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.636073 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.636089 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.636102 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.646176 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.666914 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.682267 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.694379 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.706932 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.718617 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.730642 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.739225 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.739271 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc 
kubenswrapper[4836]: I0217 14:07:14.739284 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.739322 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.739334 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.742501 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.767249 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.782597 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.793575 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.808370 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.820228 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40
a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:14Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.841070 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.841111 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.841122 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.841136 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.841145 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.943798 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.943834 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.943843 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.943857 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:14 crc kubenswrapper[4836]: I0217 14:07:14.943868 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:14Z","lastTransitionTime":"2026-02-17T14:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.046138 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.046479 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.046488 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.046502 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.046511 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.149569 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.149614 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.149624 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.149639 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.149651 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.252155 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.252220 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.252239 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.252264 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.252281 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.356321 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.356365 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.356373 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.356386 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.356396 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.459798 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.459850 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.459866 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.459886 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.459901 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.515645 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:21:37.347378851 +0000 UTC Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.563685 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.563732 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.563743 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.563757 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.563766 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.567413 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.567501 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.567520 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.567677 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:15 crc kubenswrapper[4836]: E0217 14:07:15.567674 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:15 crc kubenswrapper[4836]: E0217 14:07:15.567797 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:15 crc kubenswrapper[4836]: E0217 14:07:15.567870 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:15 crc kubenswrapper[4836]: E0217 14:07:15.567929 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.667424 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.667493 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.667709 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.667737 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.667754 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.770971 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.771010 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.771021 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.771037 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.771049 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.874217 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.874317 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.874337 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.874357 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.874375 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.977247 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.977287 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.977320 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.977336 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:15 crc kubenswrapper[4836]: I0217 14:07:15.977346 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:15Z","lastTransitionTime":"2026-02-17T14:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.080121 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.080156 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.080165 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.080181 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.080191 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.183197 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.183244 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.183260 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.183282 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.183318 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.286680 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.286739 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.286752 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.286771 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.286785 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.389472 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.389508 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.389518 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.389534 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.389545 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.492717 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.492791 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.492808 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.492826 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.492839 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.517072 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:59:40.645254981 +0000 UTC Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.595670 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.595743 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.595754 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.595769 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.595781 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.707139 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.707252 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.707269 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.707355 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.707382 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.810459 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.810516 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.810528 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.810564 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.810577 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.913101 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.913137 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.913149 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.913165 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:16 crc kubenswrapper[4836]: I0217 14:07:16.913178 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:16Z","lastTransitionTime":"2026-02-17T14:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.017012 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.017111 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.017135 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.017162 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.017173 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.120179 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.120230 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.120241 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.120258 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.120270 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.223328 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.223402 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.223413 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.223431 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.223444 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.325774 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.325830 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.325879 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.325899 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.325911 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.428173 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.428224 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.428236 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.428252 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.428261 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.517404 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:38:13.192741311 +0000 UTC Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.532512 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.532540 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.532550 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.532570 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.532586 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.567374 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:17 crc kubenswrapper[4836]: E0217 14:07:17.567898 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.568414 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.568426 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.568592 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:17 crc kubenswrapper[4836]: E0217 14:07:17.569362 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:17 crc kubenswrapper[4836]: E0217 14:07:17.569506 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:17 crc kubenswrapper[4836]: E0217 14:07:17.569616 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.634896 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.634931 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.634939 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.634952 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.634961 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.737884 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.737927 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.737937 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.737950 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.737960 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.841464 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.841514 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.841526 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.841543 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.841554 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.943926 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.943971 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.943981 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.943994 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:17 crc kubenswrapper[4836]: I0217 14:07:17.944003 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:17Z","lastTransitionTime":"2026-02-17T14:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.046740 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.046788 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.046800 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.046817 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.046832 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.150958 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.151009 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.151019 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.151034 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.151044 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.253940 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.254006 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.254066 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.254089 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.254105 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.357250 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.357313 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.357324 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.357339 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.357353 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.459831 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.459901 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.459914 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.459929 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.459940 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.518168 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:49:23.77647241 +0000 UTC Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.562398 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.562470 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.562481 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.562495 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.562506 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.664709 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.664756 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.664770 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.664788 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.664800 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.767938 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.767990 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.768003 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.768021 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.768035 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.870523 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.870577 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.870602 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.870629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.870646 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.976210 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.976264 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.976276 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.976309 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:18 crc kubenswrapper[4836]: I0217 14:07:18.976324 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:18Z","lastTransitionTime":"2026-02-17T14:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.078624 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.078668 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.078680 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.078696 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.078707 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.181435 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.181473 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.181483 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.181499 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.181509 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.284608 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.284648 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.284674 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.284688 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.284699 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.387611 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.387659 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.387669 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.387684 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.387693 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.490085 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.490126 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.490135 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.490150 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.490160 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.518448 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:21:40.03534261 +0000 UTC Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.567555 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.567611 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.567636 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:19 crc kubenswrapper[4836]: E0217 14:07:19.567676 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.567750 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:19 crc kubenswrapper[4836]: E0217 14:07:19.567875 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:19 crc kubenswrapper[4836]: E0217 14:07:19.568207 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:19 crc kubenswrapper[4836]: E0217 14:07:19.568325 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.568455 4836 scope.go:117] "RemoveContainer" containerID="4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1" Feb 17 14:07:19 crc kubenswrapper[4836]: E0217 14:07:19.568656 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.592107 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.592155 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.592167 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.592183 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.592196 4836 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.694550 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.694593 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.694605 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.694626 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.694637 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.796560 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.796606 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.796615 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.796632 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.796642 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.898887 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.898928 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.898939 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.898952 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:19 crc kubenswrapper[4836]: I0217 14:07:19.898963 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:19Z","lastTransitionTime":"2026-02-17T14:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.001622 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.001659 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.001669 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.001685 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.001696 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.103398 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.103437 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.103450 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.103464 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.103476 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.206174 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.206208 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.206216 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.206231 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.206240 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.308853 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.308895 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.308906 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.308921 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.308932 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.411589 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.411629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.411640 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.411656 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.411666 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.513858 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.514130 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.514230 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.514344 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.514428 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.519198 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:02:02.10399266 +0000 UTC Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.616798 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.616835 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.616842 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.616856 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.616865 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.718981 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.719276 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.719377 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.719472 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.719564 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.821447 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.821504 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.821516 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.821542 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.821555 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.924053 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.924109 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.924121 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.924141 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:20 crc kubenswrapper[4836]: I0217 14:07:20.924154 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:20Z","lastTransitionTime":"2026-02-17T14:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.026469 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.026765 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.026909 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.027038 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.027142 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.129728 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.129979 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.130046 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.130149 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.130222 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.232387 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.232441 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.232451 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.232464 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.232475 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.335379 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.335431 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.335442 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.335458 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.335470 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.437213 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.437244 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.437252 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.437264 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.437275 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.520006 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:24:47.01753924 +0000 UTC Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.539878 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.539921 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.539937 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.539953 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.539964 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.567032 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.567098 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.567055 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.567106 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:21 crc kubenswrapper[4836]: E0217 14:07:21.567185 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:21 crc kubenswrapper[4836]: E0217 14:07:21.567280 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:21 crc kubenswrapper[4836]: E0217 14:07:21.567375 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:21 crc kubenswrapper[4836]: E0217 14:07:21.567449 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.642388 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.642444 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.642457 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.642477 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.642489 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.745022 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.745081 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.745092 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.745108 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.745120 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.846854 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.846892 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.846900 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.846912 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.846920 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.882624 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:21 crc kubenswrapper[4836]: E0217 14:07:21.882858 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:07:21 crc kubenswrapper[4836]: E0217 14:07:21.882961 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:07:53.882936762 +0000 UTC m=+100.225865071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.949545 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.949889 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.949970 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.950044 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:21 crc kubenswrapper[4836]: I0217 14:07:21.950129 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:21Z","lastTransitionTime":"2026-02-17T14:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.052690 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.052740 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.052752 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.052773 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.052785 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.155619 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.155669 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.155685 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.155709 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.155726 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.257775 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.257805 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.257814 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.257829 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.257838 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.360547 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.360613 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.360625 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.360640 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.360651 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.408006 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.408336 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.408443 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.408536 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.408626 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: E0217 14:07:22.421993 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:22Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.426398 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.426463 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.426480 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.426509 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.426548 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: E0217 14:07:22.441908 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:22Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.445685 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.445720 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.445728 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.445742 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.445752 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: E0217 14:07:22.456921 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:22Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.460228 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.460256 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.460264 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.460277 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.460287 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: E0217 14:07:22.472426 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:22Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.475729 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.475762 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.475771 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.475784 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.475792 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: E0217 14:07:22.487045 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:22Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:22 crc kubenswrapper[4836]: E0217 14:07:22.487163 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.488625 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.488654 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.488664 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.488678 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.488689 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.521165 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:04:09.502457403 +0000 UTC Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.590726 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.590774 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.590786 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.590801 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.590813 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.693459 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.693512 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.693524 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.693543 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.693555 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.795566 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.795610 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.795622 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.795640 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.795656 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.898087 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.898127 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.898140 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.898156 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:22 crc kubenswrapper[4836]: I0217 14:07:22.898166 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:22Z","lastTransitionTime":"2026-02-17T14:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.000928 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.000973 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.000983 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.000996 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.001006 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.103415 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.103476 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.103490 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.103510 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.103524 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.206044 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.206084 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.206095 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.206110 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.206122 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.309284 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.309339 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.309350 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.309364 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.309374 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.412289 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.412351 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.412363 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.412380 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.412392 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.514867 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.514904 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.514915 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.514932 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.514945 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.521875 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:01:48.085579305 +0000 UTC Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.567275 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.567350 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:23 crc kubenswrapper[4836]: E0217 14:07:23.567419 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.567289 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.567516 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:23 crc kubenswrapper[4836]: E0217 14:07:23.567575 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:23 crc kubenswrapper[4836]: E0217 14:07:23.567652 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:23 crc kubenswrapper[4836]: E0217 14:07:23.567713 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.617587 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.617634 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.617644 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.617684 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.617694 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.720465 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.720518 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.720537 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.720556 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.720566 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.823357 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.823408 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.823421 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.823436 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.823448 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.925667 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.925710 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.925721 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.925736 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:23 crc kubenswrapper[4836]: I0217 14:07:23.925749 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:23Z","lastTransitionTime":"2026-02-17T14:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.027882 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.027924 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.027936 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.027951 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.027961 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.130600 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.130633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.130642 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.130659 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.130670 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.232215 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.232255 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.232266 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.232280 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.232310 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.334519 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.334553 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.334562 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.334574 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.334583 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.437657 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.437708 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.437720 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.437746 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.437757 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.522114 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:13:08.177979524 +0000 UTC Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.540666 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.540713 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.540733 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.540778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.540793 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.587665 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.600146 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.609708 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.619577 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.640007 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c0659834172
9c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.644284 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.644327 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.644337 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.644352 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.644363 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.656222 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.668609 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd
23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.681380 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.695902 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.707382 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.718691 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.735741 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.745782 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.745871 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.745889 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.745912 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.745925 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.750412 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1
4e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.760118 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc 
kubenswrapper[4836]: I0217 14:07:24.774603 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307
c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.785461 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.795219 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.805829 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:24Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.848073 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc 
kubenswrapper[4836]: I0217 14:07:24.848139 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.848149 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.848169 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.848184 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.950888 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.950927 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.950942 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.950962 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:24 crc kubenswrapper[4836]: I0217 14:07:24.950975 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:24Z","lastTransitionTime":"2026-02-17T14:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.027664 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/0.log" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.027747 4836 generic.go:334] "Generic (PLEG): container finished" podID="592aa549-1b1b-441e-93e4-0821e05ff2b2" containerID="d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc" exitCode=1 Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.027793 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerDied","Data":"d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.028206 4836 scope.go:117] "RemoveContainer" containerID="d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.040954 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.052805 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.054484 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.054510 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.054520 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.054535 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.054545 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.064601 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.084512 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.096406 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.109488 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.131563 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.144694 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.155710 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.157791 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.157816 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc 
kubenswrapper[4836]: I0217 14:07:25.157826 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.157864 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.157875 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.167224 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.184860 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.199011 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.210200 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.224199 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.234882 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40
a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.249490 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.260899 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.260940 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.260948 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.260963 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.260973 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.264611 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:24Z\\\",\\\"message\\\":\\\"2026-02-17T14:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4\\\\n2026-02-17T14:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4 to /host/opt/cni/bin/\\\\n2026-02-17T14:06:39Z [verbose] multus-daemon started\\\\n2026-02-17T14:06:39Z [verbose] Readiness Indicator file check\\\\n2026-02-17T14:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.275503 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-17T14:07:25Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.362889 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.362935 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.362948 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.362966 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.362977 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.465625 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.465668 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.465677 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.465692 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.465700 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.522362 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:21:02.339431977 +0000 UTC Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.566986 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:25 crc kubenswrapper[4836]: E0217 14:07:25.567115 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.567123 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:25 crc kubenswrapper[4836]: E0217 14:07:25.567273 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.567371 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:25 crc kubenswrapper[4836]: E0217 14:07:25.567443 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.567586 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:25 crc kubenswrapper[4836]: E0217 14:07:25.567658 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.568460 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.568483 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.568491 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.568502 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.568512 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.671163 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.671192 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.671200 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.671213 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.671223 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.774582 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.774646 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.774668 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.774696 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.774718 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.877063 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.877100 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.877108 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.877121 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.877129 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.979903 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.979948 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.979960 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.979976 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:25 crc kubenswrapper[4836]: I0217 14:07:25.979990 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:25Z","lastTransitionTime":"2026-02-17T14:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.031612 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/0.log" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.031664 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerStarted","Data":"b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.045923 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.057590 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.070044 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.081798 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.081832 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.081843 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.081856 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.081865 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.088611 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.103106 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.114795 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.127042 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.140829 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.151239 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.162316 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc 
kubenswrapper[4836]: I0217 14:07:26.173797 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.184462 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.184499 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.184510 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.184527 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.184538 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.184672 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d563
29a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:24Z\\\",\\\"message\\\":\\\"2026-02-17T14:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4\\\\n2026-02-17T14:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4 to /host/opt/cni/bin/\\\\n2026-02-17T14:06:39Z [verbose] multus-daemon started\\\\n2026-02-17T14:06:39Z [verbose] Readiness Indicator file check\\\\n2026-02-17T14:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.194272 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.204236 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.222447 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.233739 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.246848 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.259730 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:26Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.286424 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.286449 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.286457 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.286472 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.286481 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.388639 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.388695 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.388704 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.388717 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.388726 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.491378 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.491424 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.491436 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.491451 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.491465 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.522963 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:56:33.039720854 +0000 UTC Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.593245 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.593274 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.593281 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.593313 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.593322 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.696065 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.696116 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.696129 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.696146 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.696158 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.798387 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.798429 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.798438 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.798453 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.798462 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.901175 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.901224 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.901236 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.901253 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:26 crc kubenswrapper[4836]: I0217 14:07:26.901265 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:26Z","lastTransitionTime":"2026-02-17T14:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.003961 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.004015 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.004025 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.004043 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.004052 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.106446 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.106493 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.106503 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.106519 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.106530 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.209125 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.209170 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.209183 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.209200 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.209211 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.312020 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.312075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.312085 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.312101 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.312112 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.414693 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.414765 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.414777 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.414814 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.414825 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.517159 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.517191 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.517202 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.517218 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.517229 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.523498 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:48:27.143352604 +0000 UTC Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.567996 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.568042 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.568099 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.568138 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:27 crc kubenswrapper[4836]: E0217 14:07:27.568229 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:27 crc kubenswrapper[4836]: E0217 14:07:27.568418 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:27 crc kubenswrapper[4836]: E0217 14:07:27.568512 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:27 crc kubenswrapper[4836]: E0217 14:07:27.568655 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.619600 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.619643 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.619654 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.619667 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.619675 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.722243 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.722285 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.722318 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.722336 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.722347 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.824265 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.824336 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.824349 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.824364 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.824375 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.927096 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.927139 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.927151 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.927166 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:27 crc kubenswrapper[4836]: I0217 14:07:27.927194 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:27Z","lastTransitionTime":"2026-02-17T14:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.029653 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.029719 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.029754 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.029778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.029793 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.132529 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.132600 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.132611 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.132629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.132662 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.234688 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.234733 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.234746 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.234763 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.234773 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.338519 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.338648 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.338672 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.338694 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.338706 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.441467 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.441517 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.441529 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.441545 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.441556 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.524456 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:48:24.594580092 +0000 UTC Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.543725 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.543773 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.543789 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.543821 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.543834 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.646532 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.646592 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.646605 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.646622 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.646633 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.748886 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.748933 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.748943 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.748959 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.748989 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.850643 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.851221 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.851240 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.851256 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.851267 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.953604 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.953638 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.953646 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.953659 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:28 crc kubenswrapper[4836]: I0217 14:07:28.953668 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:28Z","lastTransitionTime":"2026-02-17T14:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.055676 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.055757 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.055769 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.055787 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.055798 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.158822 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.158944 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.158963 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.158985 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.159002 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.262661 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.262714 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.262726 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.262743 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.262752 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.365801 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.365864 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.365875 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.365893 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.365903 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.468475 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.468538 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.468553 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.468570 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.468904 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.525628 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:02:29.486625263 +0000 UTC Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.567704 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.567728 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.567741 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:29 crc kubenswrapper[4836]: E0217 14:07:29.568084 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:29 crc kubenswrapper[4836]: E0217 14:07:29.568225 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:29 crc kubenswrapper[4836]: E0217 14:07:29.568323 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.568640 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:29 crc kubenswrapper[4836]: E0217 14:07:29.568745 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.571590 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.571667 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.571675 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.571691 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.571701 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.673802 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.673846 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.673857 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.673874 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.673886 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.777100 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.777174 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.777190 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.777712 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.777770 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.881083 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.881159 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.881183 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.881212 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.881233 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.984115 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.984169 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.984185 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.984204 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:29 crc kubenswrapper[4836]: I0217 14:07:29.984218 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:29Z","lastTransitionTime":"2026-02-17T14:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.086872 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.086917 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.086932 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.086949 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.086961 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.189481 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.189533 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.189544 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.189563 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.189574 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.292529 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.292584 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.292597 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.292615 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.292628 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.395171 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.395260 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.395280 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.395323 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.395340 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.497637 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.497704 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.497715 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.497729 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.497739 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.526243 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:38:40.528963581 +0000 UTC Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.599664 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.599717 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.599733 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.599754 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.599770 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.702921 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.702990 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.703004 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.703025 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.703040 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.805697 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.805763 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.805784 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.805804 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.805818 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.908206 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.908268 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.908279 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.908311 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:30 crc kubenswrapper[4836]: I0217 14:07:30.908324 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:30Z","lastTransitionTime":"2026-02-17T14:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.011108 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.011155 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.011164 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.011178 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.011190 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.113472 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.113507 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.113518 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.113533 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.113547 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.215829 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.215883 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.215895 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.215917 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.215928 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.318617 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.318650 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.318659 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.318674 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.318683 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.422086 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.422124 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.422135 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.422150 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.422162 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.525872 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.525963 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.525986 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.526016 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.526033 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.526676 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:55:16.977807636 +0000 UTC Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.567634 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:31 crc kubenswrapper[4836]: E0217 14:07:31.567767 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.567966 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:31 crc kubenswrapper[4836]: E0217 14:07:31.568066 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.568147 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.568199 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:31 crc kubenswrapper[4836]: E0217 14:07:31.568369 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:31 crc kubenswrapper[4836]: E0217 14:07:31.568651 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.628434 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.628465 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.628473 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.628486 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.628495 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.731393 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.731448 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.731464 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.731487 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.731507 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.836242 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.836288 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.836324 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.836341 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.836353 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.938671 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.938718 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.938727 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.938741 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:31 crc kubenswrapper[4836]: I0217 14:07:31.938750 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:31Z","lastTransitionTime":"2026-02-17T14:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.041733 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.041783 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.041793 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.041814 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.041824 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.144119 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.144164 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.144173 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.144187 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.144196 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.247290 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.247347 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.247368 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.247384 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.247396 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.350992 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.351072 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.351090 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.351113 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.351133 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.453485 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.453528 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.453539 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.453554 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.453564 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.527759 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:25:02.760338148 +0000 UTC Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.555347 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.555382 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.555389 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.555403 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.555413 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.567696 4836 scope.go:117] "RemoveContainer" containerID="4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.658007 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.658050 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.658062 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.658080 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.658095 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.760541 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.760572 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.760579 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.760594 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.760603 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.863483 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.863562 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.863584 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.863610 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.863631 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.887375 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.887413 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.887424 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.887441 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.887454 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: E0217 14:07:32.908541 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:32Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.912292 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.912339 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.912348 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.912364 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.912374 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: E0217 14:07:32.924900 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:32Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.928426 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.928473 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.928484 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.928500 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.928511 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: E0217 14:07:32.939859 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:32Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.944104 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.944147 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.944159 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.944175 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.944187 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: E0217 14:07:32.959442 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:32Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.964764 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.964818 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.964831 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.964849 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.964861 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:32 crc kubenswrapper[4836]: E0217 14:07:32.978476 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:32Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:32 crc kubenswrapper[4836]: E0217 14:07:32.978636 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.980501 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.980540 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.980550 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.980563 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:32 crc kubenswrapper[4836]: I0217 14:07:32.980572 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:32Z","lastTransitionTime":"2026-02-17T14:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.057718 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/2.log" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.060772 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.061246 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.089362 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9
fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.091496 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.091540 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.091549 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.091563 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.091575 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.112556 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.125821 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.139848 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.155428 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.171103 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b3
0211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.193931 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.193982 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.193993 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.194011 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.194023 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.203496 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 
services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\
\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.225821 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.240056 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.251834 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.264018 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.275684 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.292968 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.296664 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.296697 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.296707 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.296723 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.296735 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.303678 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.315700 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc 
kubenswrapper[4836]: I0217 14:07:33.328315 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.341151 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:24Z\\\",\\\"message\\\":\\\"2026-02-17T14:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4\\\\n2026-02-17T14:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4 to /host/opt/cni/bin/\\\\n2026-02-17T14:06:39Z [verbose] multus-daemon started\\\\n2026-02-17T14:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T14:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.354355 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:33Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.399453 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.399498 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.399507 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc 
kubenswrapper[4836]: I0217 14:07:33.399522 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.399532 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.501835 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.501882 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.501906 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.501926 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.501938 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.528233 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:02:57.355718812 +0000 UTC Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.567724 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.567842 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:33 crc kubenswrapper[4836]: E0217 14:07:33.567951 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.568208 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:33 crc kubenswrapper[4836]: E0217 14:07:33.568280 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.568542 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:33 crc kubenswrapper[4836]: E0217 14:07:33.568605 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:33 crc kubenswrapper[4836]: E0217 14:07:33.568841 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.604724 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.604778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.604792 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.604808 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.604822 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.706960 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.707017 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.707029 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.707046 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.707058 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.809659 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.809769 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.809786 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.810056 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.810071 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.912013 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.912091 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.912102 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.912117 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:33 crc kubenswrapper[4836]: I0217 14:07:33.912129 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:33Z","lastTransitionTime":"2026-02-17T14:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.015201 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.015271 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.015289 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.015339 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.015365 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.066263 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/3.log" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.067116 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/2.log" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.070121 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" exitCode=1 Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.070187 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.070244 4836 scope.go:117] "RemoveContainer" containerID="4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.071288 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:07:34 crc kubenswrapper[4836]: E0217 14:07:34.071884 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.087719 4836 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67
a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.099635 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.113003 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.117021 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.117044 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.117054 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.117070 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.117081 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.129119 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.142557 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.154423 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.172525 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:33Z\\\",\\\"message\\\":\\\"02 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571037 6902 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571067 6902 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571422 6902 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.572098 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 14:07:33.572236 6902 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 14:07:33.572310 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 14:07:33.572319 6902 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 14:07:33.572320 6902 factory.go:656] Stopping watch factory\\\\nI0217 14:07:33.639691 6902 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 14:07:33.639727 6902 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 14:07:33.639772 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0217 14:07:33.639792 6902 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 14:07:33.639858 6902 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.190014 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d9
2528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.204694 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.218070 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc 
kubenswrapper[4836]: I0217 14:07:34.218991 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.219034 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.219050 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.219074 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.219090 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.238673 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf1
0370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.254345 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:24Z\\\",\\\"message\\\":\\\"2026-02-17T14:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4\\\\n2026-02-17T14:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4 to /host/opt/cni/bin/\\\\n2026-02-17T14:06:39Z [verbose] multus-daemon started\\\\n2026-02-17T14:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T14:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.271289 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.293960 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.308767 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.321719 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.321767 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.321778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.321794 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.321805 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.324802 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.337586 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.349122 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.424618 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.424665 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.424677 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.424696 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.424706 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.527059 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.527118 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.527139 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.527161 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.527174 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.529267 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:33:10.70405295 +0000 UTC Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.585777 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 
14:07:34.604737 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.617324 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.629768 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.629835 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.629847 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.629893 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.629912 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.633450 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.646252 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.657257 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.679141 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db5a5e396fbddd65ca0e94f1414b226b324f8bce9dd7c243d40bfbd0d060bb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:02Z\\\",\\\"message\\\":\\\"\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828170 6489 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828220 6489 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0217 14:07:02.828227 6489 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 14:07:02.828241 6489 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.693755ms\\\\nI0217 14:07:02.828254 6489 services_control\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:33Z\\\",\\\"message\\\":\\\"02 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571037 6902 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571067 6902 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571422 6902 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.572098 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 14:07:33.572236 6902 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 14:07:33.572310 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 14:07:33.572319 6902 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 14:07:33.572320 6902 factory.go:656] Stopping watch factory\\\\nI0217 14:07:33.639691 6902 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 14:07:33.639727 6902 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 14:07:33.639772 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0217 14:07:33.639792 6902 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 14:07:33.639858 6902 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.691753 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.704248 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.717134 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.729266 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.732235 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.732274 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.732284 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.732324 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.732343 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.740705 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.749666 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.764584 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf79
8778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d
34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.778020 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b
9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.788852 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc 
kubenswrapper[4836]: I0217 14:07:34.802992 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.818007 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:24Z\\\",\\\"message\\\":\\\"2026-02-17T14:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4\\\\n2026-02-17T14:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4 to /host/opt/cni/bin/\\\\n2026-02-17T14:06:39Z [verbose] multus-daemon started\\\\n2026-02-17T14:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T14:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:34Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.834863 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.834902 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.834911 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.834924 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.834933 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.937064 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.937101 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.937424 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.937772 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:34 crc kubenswrapper[4836]: I0217 14:07:34.937854 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:34Z","lastTransitionTime":"2026-02-17T14:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.041497 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.041553 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.041571 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.041594 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.041610 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.077772 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/3.log" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.083067 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:07:35 crc kubenswrapper[4836]: E0217 14:07:35.083365 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.103466 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.120557 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.144560 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.144618 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.144631 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.144648 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.144659 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.150666 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.168083 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.184085 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.196594 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.210458 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.225074 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.236521 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.247444 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.247504 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.247517 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.247538 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.247552 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.268863 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:33Z\\\",\\\"message\\\":\\\"02 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571037 6902 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571067 6902 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571422 6902 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.572098 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 14:07:33.572236 6902 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 14:07:33.572310 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 14:07:33.572319 6902 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 14:07:33.572320 6902 factory.go:656] Stopping watch factory\\\\nI0217 14:07:33.639691 6902 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 14:07:33.639727 6902 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 14:07:33.639772 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0217 14:07:33.639792 6902 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 14:07:33.639858 6902 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.284698 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.298852 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.315154 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.332039 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.347383 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.349855 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.349897 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.349908 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.349926 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.349936 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.358279 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc 
kubenswrapper[4836]: I0217 14:07:35.369828 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.381818 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:24Z\\\",\\\"message\\\":\\\"2026-02-17T14:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4\\\\n2026-02-17T14:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4 to /host/opt/cni/bin/\\\\n2026-02-17T14:06:39Z [verbose] multus-daemon started\\\\n2026-02-17T14:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T14:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:35Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.452018 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.452345 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.452422 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.452513 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.452583 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.529935 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:21:19.640306914 +0000 UTC Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.555437 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.555493 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.555509 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.555530 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.555546 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.568005 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.568110 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:35 crc kubenswrapper[4836]: E0217 14:07:35.568209 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.568233 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:35 crc kubenswrapper[4836]: E0217 14:07:35.568818 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:35 crc kubenswrapper[4836]: E0217 14:07:35.568934 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.568981 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:35 crc kubenswrapper[4836]: E0217 14:07:35.569116 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.658194 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.658239 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.658255 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.658275 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.658337 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.761758 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.761793 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.761801 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.761814 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.761823 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.864207 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.864248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.864259 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.864274 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.864283 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.968665 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.968695 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.968703 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.968715 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:35 crc kubenswrapper[4836]: I0217 14:07:35.968724 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:35Z","lastTransitionTime":"2026-02-17T14:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.071497 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.071643 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.071657 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.071671 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.071678 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.174647 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.174707 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.174721 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.174742 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.174756 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.276765 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.276806 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.276818 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.276832 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.276841 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.379373 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.379423 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.379437 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.379454 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.379466 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.482129 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.482182 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.482195 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.482213 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.482225 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.530312 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:03:19.085337094 +0000 UTC Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.585358 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.585844 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.585876 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.585908 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.585932 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.688429 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.688477 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.688488 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.688505 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.688517 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.791238 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.791281 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.791306 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.791320 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.791331 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.894258 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.894331 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.894343 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.894363 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.894375 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.997410 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.997468 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.997484 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.997505 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:36 crc kubenswrapper[4836]: I0217 14:07:36.997522 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:36Z","lastTransitionTime":"2026-02-17T14:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.100457 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.100492 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.100502 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.100514 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.100522 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.202590 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.202622 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.202633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.202649 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.202659 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.305274 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.305335 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.305346 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.305362 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.305376 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.407641 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.407685 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.407696 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.407712 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.407723 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.510240 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.510275 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.510285 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.510313 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.510323 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.530895 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:45:21.917980104 +0000 UTC Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.567764 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.567859 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.567918 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.567891 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:37 crc kubenswrapper[4836]: E0217 14:07:37.568672 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:37 crc kubenswrapper[4836]: E0217 14:07:37.568740 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:37 crc kubenswrapper[4836]: E0217 14:07:37.568816 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:37 crc kubenswrapper[4836]: E0217 14:07:37.568875 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.613657 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.613702 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.613714 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.613734 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.613748 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.716636 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.716690 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.716702 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.716720 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.716731 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.819515 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.819578 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.819591 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.819611 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.819624 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.922364 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.922411 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.922423 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.922439 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:37 crc kubenswrapper[4836]: I0217 14:07:37.922453 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:37Z","lastTransitionTime":"2026-02-17T14:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.024898 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.024959 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.024973 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.024990 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.025006 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.128173 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.129012 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.129070 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.129098 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.129129 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.231716 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.231766 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.231778 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.231795 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.231810 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.334754 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.334826 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.334859 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.334892 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.334913 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.438468 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.438544 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.438560 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.438590 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.438606 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.531228 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:02:39.86197183 +0000 UTC Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.540746 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.540785 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.540793 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.540806 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.540817 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.643489 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.643545 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.643558 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.643576 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.643591 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.746552 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.746614 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.746630 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.746656 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.746673 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.848741 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.848790 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.848799 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.848812 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.848822 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.951962 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.952028 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.952055 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.952109 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:38 crc kubenswrapper[4836]: I0217 14:07:38.952135 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:38Z","lastTransitionTime":"2026-02-17T14:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.054968 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.055015 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.055028 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.055046 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.055057 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.157665 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.157717 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.157727 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.157742 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.157753 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.260058 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.260093 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.260104 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.260116 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.260127 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.268396 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.268486 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.268518 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.268624 4836 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.268678 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.268643807 +0000 UTC m=+149.611572076 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.268745 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.26873629 +0000 UTC m=+149.611664639 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.268720 4836 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.268960 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.268927605 +0000 UTC m=+149.611855874 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.366149 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.366244 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.366268 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.366331 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.366377 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.368983 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.369054 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369220 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369260 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369280 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369346 4836 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369365 4836 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl 
for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369308 4836 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369425 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.369403859 +0000 UTC m=+149.712332158 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.369466 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.36944953 +0000 UTC m=+149.712377799 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.469030 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.469083 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.469092 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.469106 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.469115 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.531746 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:28:24.157805793 +0000 UTC Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.566938 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.566972 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.567068 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.567114 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.567384 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.567569 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.567713 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:39 crc kubenswrapper[4836]: E0217 14:07:39.567810 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.572229 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.572262 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.572274 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.572289 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.572321 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.673953 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.673983 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.673992 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.674027 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.674037 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.777277 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.777360 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.777374 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.777394 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.777406 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.879872 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.879908 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.879936 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.879948 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.879958 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.981712 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.981806 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.981823 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.981845 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:39 crc kubenswrapper[4836]: I0217 14:07:39.981862 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:39Z","lastTransitionTime":"2026-02-17T14:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.084447 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.084508 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.084519 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.084536 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.084547 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.187345 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.187409 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.187425 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.187447 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.187464 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.290342 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.290387 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.290397 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.290413 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.290422 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.393018 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.393075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.393088 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.393104 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.393115 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.497413 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.497492 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.497515 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.497544 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.497567 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.532337 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:22:45.190093635 +0000 UTC Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.600497 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.600539 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.600548 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.600563 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.600574 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.703908 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.703980 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.703993 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.704011 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.704044 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.806265 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.806333 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.806344 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.806357 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.806367 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.908094 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.908131 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.908141 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.908152 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:40 crc kubenswrapper[4836]: I0217 14:07:40.908160 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:40Z","lastTransitionTime":"2026-02-17T14:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.011893 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.011960 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.011978 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.012000 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.012016 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.114641 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.115125 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.115148 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.115174 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.115194 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.217724 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.217785 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.217795 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.217815 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.217827 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.321354 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.321434 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.321450 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.321475 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.321490 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.424258 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.424340 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.424357 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.424381 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.424399 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.527776 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.527827 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.527835 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.527851 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.527860 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.532982 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:21:57.448419359 +0000 UTC Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.567623 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:41 crc kubenswrapper[4836]: E0217 14:07:41.567835 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.568557 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.568630 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.568746 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:41 crc kubenswrapper[4836]: E0217 14:07:41.568656 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:41 crc kubenswrapper[4836]: E0217 14:07:41.568854 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:41 crc kubenswrapper[4836]: E0217 14:07:41.568972 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.631009 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.631064 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.631077 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.631097 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.631108 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.734382 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.734462 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.734481 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.734507 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.734526 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.838432 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.838482 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.838494 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.838512 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.838524 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.942561 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.942628 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.942640 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.942706 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:41 crc kubenswrapper[4836]: I0217 14:07:41.942719 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:41Z","lastTransitionTime":"2026-02-17T14:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.046029 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.046086 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.046096 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.046119 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.046130 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.153528 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.154134 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.154411 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.154451 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.154473 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.258723 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.258776 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.258785 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.258806 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.258818 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.361765 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.361813 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.361925 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.361949 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.361964 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.464740 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.464787 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.464799 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.464814 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.464825 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.533288 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:58:23.448727432 +0000 UTC Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.567884 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.568005 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.568019 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.568036 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.568047 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.669828 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.669884 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.669893 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.669905 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.669933 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.772030 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.772086 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.772097 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.772115 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.772125 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.875078 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.875128 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.875139 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.875155 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.875511 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.978036 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.978093 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.978108 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.978124 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:42 crc kubenswrapper[4836]: I0217 14:07:42.978385 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:42Z","lastTransitionTime":"2026-02-17T14:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.081918 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.081973 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.081989 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.082008 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.082024 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.184826 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.184897 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.184925 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.184956 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.184978 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.200833 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.200906 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.200930 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.200959 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.200985 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.220221 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.225410 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.225450 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.225492 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.225514 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.225529 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.241163 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.244814 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.244841 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.244849 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.244863 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.244872 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.255685 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.259900 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.259959 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.259971 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.260010 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.260022 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.274172 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.277444 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.277487 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.277500 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.277538 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.277556 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.288149 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d638d470-b0e0-4be9-938f-7ec815bf6bd8\\\",\\\"systemUUID\\\":\\\"f194f106-0bf2-4b65-bcb3-5215631b39d2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:43Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.288332 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.289715 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.289760 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.289770 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.289786 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.289796 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.391555 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.391823 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.391957 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.392072 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.392162 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.495917 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.495965 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.495982 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.496006 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.496024 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.533501 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:46:54.640625903 +0000 UTC Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.567361 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.567423 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.567430 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.567380 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.567527 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.567682 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.567757 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:43 crc kubenswrapper[4836]: E0217 14:07:43.567858 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.599022 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.599070 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.599113 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.599134 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.599149 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.702483 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.702555 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.702578 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.702608 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.702631 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.805549 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.805629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.805650 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.805683 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.805704 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.908599 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.908663 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.908680 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.908702 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:43 crc kubenswrapper[4836]: I0217 14:07:43.908720 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:43Z","lastTransitionTime":"2026-02-17T14:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.011612 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.011655 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.011670 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.011690 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.011705 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.117200 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.117280 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.117345 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.117377 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.117411 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.221150 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.221216 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.221250 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.221286 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.221364 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.324650 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.324711 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.324720 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.324738 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.324750 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.429647 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.429688 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.429697 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.429710 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.429719 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.532438 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.532711 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.532765 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.532800 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.532823 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.534634 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:54:20.404045616 +0000 UTC Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.585954 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vt5sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6d1f430-35ed-4c4e-a797-d7a0a5a45266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f1e7f83b2916db4faac1afe1f5107375c0e1674ff1d6f90f1025a9920111e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqtsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vt5sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.612123 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"243121c7-6c63-4df6-a6a6-1ef6770e1125\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0143c73054039d5957c29bdf62217900d803ba2e0046c09dcff149b27fc94995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b1d7908740d818568e489caf3447ab0f5768c180cc6d443adc67414e51cd07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e054e9ec9a1f8d54b704be11ff2e55c9aaad9d717b1817aa553cfbf2cc6ac16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a5d1fd5d2de253417bcb1e4edadc2363a1c259769418eb2b2aab3ff8253de2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf3d3daf43d0da96ad0c1340b3c72815adceec24d0ed5fc97f965d1d93e7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854199187b81a3d6692a05073c35985feb14993f76f4ac86149c2b6bac635e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67b26f0d65128004f7645ac1f54a11bc56909269cb9b9337e447b0d5bf783c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fc4a71369f80c06598341729c7db0cb6e0f33d70b0be64470bc6fd40d1cce2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.634277 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91b942ef1c58993ae6344d4aef5fcfe21434b5f3b3282bdf8070560fa8973a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77390b24cb4a223b8eb6a46db49a4a88b68f87aa0b4b134b45a552c180dec5b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.635342 4836 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.635374 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.635385 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.635409 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.635433 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.653099 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.676027 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6efd6a58b6d90897fa902c8d0c8082ecec2899f16adc3e56b98f0bc5c4640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.691677 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.706076 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.719137 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jlz6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f87e91ef-e64c-45a5-9bd5-cc6537e51b1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03fa870790f606ebf6db3bac825e59b209b7a2bd6cb22264597b30211fbc2fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vbzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jlz6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.739388 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.739431 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.739440 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.739455 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.739466 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.739518 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:33Z\\\",\\\"message\\\":\\\"02 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571037 6902 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571067 6902 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.571422 6902 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 14:07:33.572098 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 14:07:33.572236 6902 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 14:07:33.572310 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 14:07:33.572319 6902 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 14:07:33.572320 6902 factory.go:656] Stopping watch factory\\\\nI0217 14:07:33.639691 6902 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 14:07:33.639727 6902 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 14:07:33.639772 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0217 14:07:33.639792 6902 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 14:07:33.639858 6902 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:07:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e6d07edc7d7857cb
288980bc7e9d5002234c2fe642df6798611b150b6d66e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zdwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfznp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.754526 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e7e5f3-c249-4609-937e-ffdc78580880\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T14:06:35Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 14:06:30.193193 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 14:06:30.195418 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-478710080/tls.crt::/tmp/serving-cert-478710080/tls.key\\\\\\\"\\\\nI0217 14:06:35.690345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 14:06:35.701676 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 14:06:35.701714 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 14:06:35.701739 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 14:06:35.701745 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 14:06:35.709986 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 14:06:35.710011 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 14:06:35.710020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 14:06:35.710023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 14:06:35.710027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 14:06:35.710030 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 14:06:35.710186 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 14:06:35.711336 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf6a761df18c5e7699f2ec87d4b97f19d
a71f588cb12eff8e541140a361fa905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.769145 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e7e5246-9255-4e1e-95c9-44606b16c1ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125ff8db8d520f18c812d9fabdb8e31c63958022c790a31167fabc3549de5f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd23a5cef9ad5b95a15765a1cc2055e05de0f9821f426745c1e50829ed1c8141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c943885038503c4b93bc7688fb4ac415496280e3f969326883c0d366c66e08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://73bf4b735f1125a8bb5b1f3f449b651c60fdc4ee62b918e28eb3b13278e1f7d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.784878 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc30b1c397bb81c58abc71fb2c6d71c94b9a0f7962d8278edc4f65f1a1c64ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.803432 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t7845" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3eeaa6bd-bab3-4310-9522-747924f2e825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef47d3331023c2e4524dc2c8958b4ed4e2a499eb4d5f084f70cd4afcca1d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6778ba6ea7cfd1d378cae6dc4330542c35cedaff8e820a2158571f542e0ec55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14eab60dcf798778aa77f01af0307c7a5ef44846ceaf5a12e4a5f529b812aaa4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd65cf675b0d34fc5ea309bdcd17ef648bbf1c0ee6e2ece25f459d8d2403387c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b15d92528d05fd8bcab26df5eeb85616b44e750574cc8effa1f34c39dc329bf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddc9616f0a81f7623eb10b0cdb194656ad856548f814df2e6cebdb2419293f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a96c0c25b8539ae8bb660010bbcd7d147a150bfb653dc725a34bc7e4e4b949b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T14:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grf7r\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t7845\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.815172 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b98ed9fc-ca9f-49c8-b92d-c5b58df2ce3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac6f7d7722015b1a0d40a119c1c74e8a37a7e96e735f56a93ee2fde68d0e10f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f8fdf114b2f1ca00e55833ea4eeb033f97b9fa48a3308c4db3c7dbc617c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6dh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:48Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7nmc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.828523 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c4txt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c4txt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc 
kubenswrapper[4836]: I0217 14:07:44.842056 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a443a5a-83a8-4154-94b7-ace6b324f461\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8787a45a4a94da65a0f67f1a402bfa526fa57631259981ded7ec3569fa977f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21927a2142504e88fd8bb2a254d804c0e13f6a42e81e16c02205292de3de3155\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c21e5f00c6fb44be82d2a907c0f9d17ea47071615370135f0e94a62f1eb7840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.842164 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.842193 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.842203 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.842218 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.842228 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.856716 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c76cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592aa549-1b1b-441e-93e4-0821e05ff2b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d563
29a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T14:07:24Z\\\",\\\"message\\\":\\\"2026-02-17T14:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4\\\\n2026-02-17T14:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_500ea1db-a57d-4559-adb6-4997611db3f4 to /host/opt/cni/bin/\\\\n2026-02-17T14:06:39Z [verbose] multus-daemon started\\\\n2026-02-17T14:06:39Z [verbose] Readiness Indicator file check\\\\n2026-02-17T14:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c76cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.868090 4836 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895a19c9-a3f0-4a15-aa19-19347121388c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T14:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bce78b27c441ccb16f0fd2489cbe3e38be0f3d330fcd2b602f62a8ca0aa8616b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e7
88b5f2787c522a52d28f58cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T14:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99tf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T14:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bkk9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T14:07:44Z is after 2025-08-24T17:21:41Z" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.943843 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.943878 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.943889 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:44 crc 
kubenswrapper[4836]: I0217 14:07:44.943906 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:44 crc kubenswrapper[4836]: I0217 14:07:44.943917 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:44Z","lastTransitionTime":"2026-02-17T14:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.046166 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.046198 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.046206 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.046220 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.046229 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.149148 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.149193 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.149206 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.149225 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.149238 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.252525 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.252695 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.252718 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.252754 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.252791 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.355581 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.355640 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.355651 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.355668 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.355680 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.457543 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.457597 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.457615 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.457638 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.457654 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.535692 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:46:32.808339694 +0000 UTC Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.559904 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.559957 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.559972 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.559994 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.560007 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.567310 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.567542 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.567590 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.567560 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:45 crc kubenswrapper[4836]: E0217 14:07:45.567677 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:45 crc kubenswrapper[4836]: E0217 14:07:45.567768 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:45 crc kubenswrapper[4836]: E0217 14:07:45.567844 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:45 crc kubenswrapper[4836]: E0217 14:07:45.567921 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.662712 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.662751 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.662762 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.662776 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.662787 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.765389 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.765446 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.765461 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.765480 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.765497 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.868312 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.868352 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.868366 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.868385 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.868398 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.970644 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.970700 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.970718 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.970738 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:45 crc kubenswrapper[4836]: I0217 14:07:45.970753 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:45Z","lastTransitionTime":"2026-02-17T14:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.073073 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.073145 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.073172 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.073200 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.073220 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.175934 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.176178 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.176410 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.176516 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.176702 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.279021 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.279058 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.279071 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.279094 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.279108 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.381946 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.381990 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.382001 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.382017 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.382030 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.484471 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.484539 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.484549 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.484562 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.484570 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.536134 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:50:22.6035668 +0000 UTC Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.568394 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:07:46 crc kubenswrapper[4836]: E0217 14:07:46.568570 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.578543 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.586466 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.586486 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.586494 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.586505 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.586513 4836 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.688714 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.689003 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.689084 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.689157 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.689225 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.791567 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.791683 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.791726 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.791739 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.791750 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.894873 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.894916 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.894928 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.894941 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.894949 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.999120 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.999181 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.999205 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.999236 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:46 crc kubenswrapper[4836]: I0217 14:07:46.999255 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:46Z","lastTransitionTime":"2026-02-17T14:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.102477 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.102529 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.102540 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.102558 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.102570 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.205384 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.205421 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.205433 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.205449 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.205460 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.308163 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.308203 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.308216 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.308232 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.308242 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.415248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.415463 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.415489 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.415520 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.415538 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.518361 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.518450 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.518466 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.518491 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.518510 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.536778 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:58:35.586604921 +0000 UTC Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.567377 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.567692 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.567378 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:47 crc kubenswrapper[4836]: E0217 14:07:47.567889 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:47 crc kubenswrapper[4836]: E0217 14:07:47.567761 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:47 crc kubenswrapper[4836]: E0217 14:07:47.568044 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.567377 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:47 crc kubenswrapper[4836]: E0217 14:07:47.568474 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.620652 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.620943 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.621044 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.621143 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.621443 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.724398 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.724447 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.724459 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.724477 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.724488 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.827264 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.827328 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.827343 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.827358 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.827368 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.930422 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.930465 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.930477 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.930493 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:47 crc kubenswrapper[4836]: I0217 14:07:47.930521 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:47Z","lastTransitionTime":"2026-02-17T14:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.033114 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.033156 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.033164 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.033180 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.033191 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.136600 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.136674 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.136693 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.136716 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.136733 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.240036 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.240161 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.240187 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.240217 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.240240 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.342744 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.342793 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.342805 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.342822 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.342835 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.445123 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.445169 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.445181 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.445199 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.445211 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.537277 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:49:34.849817307 +0000 UTC Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.547878 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.547936 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.547958 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.547988 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.548014 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.651502 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.651579 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.651597 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.651624 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.651642 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.753680 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.753720 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.753729 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.753742 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.753751 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.855990 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.856027 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.856050 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.856079 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.856103 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.957891 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.957930 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.957944 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.957960 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:48 crc kubenswrapper[4836]: I0217 14:07:48.958003 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:48Z","lastTransitionTime":"2026-02-17T14:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.060922 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.060994 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.061020 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.061052 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.061076 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.163132 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.163163 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.163171 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.163184 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.163195 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.265720 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.265774 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.265786 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.265808 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.265820 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.369075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.369136 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.369154 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.369176 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.369193 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.471580 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.471649 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.471671 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.471696 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.471716 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.538518 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:35:54.929073955 +0000 UTC Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.568043 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.568089 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.568080 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.568144 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:49 crc kubenswrapper[4836]: E0217 14:07:49.568276 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:49 crc kubenswrapper[4836]: E0217 14:07:49.568487 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:49 crc kubenswrapper[4836]: E0217 14:07:49.568619 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:49 crc kubenswrapper[4836]: E0217 14:07:49.568930 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.574563 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.574605 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.574615 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.574630 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.574639 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.676462 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.676496 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.676504 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.676517 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.676533 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.779178 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.779262 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.779285 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.779368 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.779393 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.882350 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.882418 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.882437 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.882465 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.882485 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.984862 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.984894 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.984902 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.984915 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:49 crc kubenswrapper[4836]: I0217 14:07:49.984944 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:49Z","lastTransitionTime":"2026-02-17T14:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.088532 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.088595 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.088608 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.088629 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.088643 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.191250 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.191309 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.191328 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.191346 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.191357 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.293969 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.294005 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.294014 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.294027 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.294035 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.395885 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.395922 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.395931 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.395946 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.395955 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.498976 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.499001 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.499008 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.499021 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.499030 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.539392 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:52:46.854796929 +0000 UTC Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.602187 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.602242 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.602254 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.602269 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.602281 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.705398 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.705445 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.705455 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.705469 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.705478 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.808173 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.808250 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.808278 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.808356 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.808385 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.912529 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.912609 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.912628 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.912657 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:50 crc kubenswrapper[4836]: I0217 14:07:50.912676 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:50Z","lastTransitionTime":"2026-02-17T14:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.015864 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.016124 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.016249 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.016350 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.016419 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.119164 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.119233 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.119246 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.119265 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.119283 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.222151 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.222212 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.222231 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.222251 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.222262 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.324900 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.325248 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.325371 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.325466 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.325555 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.428593 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.428633 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.428644 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.428660 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.428671 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.531800 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.532194 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.532463 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.532704 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.532890 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.540336 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:58:12.314732636 +0000 UTC Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.566948 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.566984 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.566954 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:51 crc kubenswrapper[4836]: E0217 14:07:51.567072 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.566948 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:51 crc kubenswrapper[4836]: E0217 14:07:51.567186 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:51 crc kubenswrapper[4836]: E0217 14:07:51.567241 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:51 crc kubenswrapper[4836]: E0217 14:07:51.567281 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.635823 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.635874 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.635890 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.635909 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.635923 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.738402 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.738462 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.738485 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.738511 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.738529 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.842015 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.842062 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.842075 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.842096 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.842112 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.944563 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.944601 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.944610 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.944626 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:51 crc kubenswrapper[4836]: I0217 14:07:51.944636 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:51Z","lastTransitionTime":"2026-02-17T14:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.047762 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.047834 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.047853 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.047886 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.047924 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.149864 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.149911 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.149923 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.149942 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.149954 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.253002 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.253047 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.253059 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.253076 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.253094 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.355834 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.355897 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.355908 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.355921 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.355930 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.459025 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.459063 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.459073 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.459088 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.459099 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.541230 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:36:28.824234902 +0000 UTC Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.561079 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.561170 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.561188 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.561209 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.561224 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.663078 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.663120 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.663135 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.663150 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.663159 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.766564 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.766632 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.766657 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.766686 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.766703 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.869706 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.869807 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.869825 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.869849 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.869866 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.972706 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.972767 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.972776 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.972794 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:52 crc kubenswrapper[4836]: I0217 14:07:52.972803 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:52Z","lastTransitionTime":"2026-02-17T14:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.075729 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.075801 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.075818 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.075840 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.075860 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:53Z","lastTransitionTime":"2026-02-17T14:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.178972 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.179026 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.179039 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.179057 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.179070 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:53Z","lastTransitionTime":"2026-02-17T14:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.281897 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.281943 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.281954 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.281972 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.281984 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:53Z","lastTransitionTime":"2026-02-17T14:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.384219 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.384274 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.384290 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.384338 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.384351 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:53Z","lastTransitionTime":"2026-02-17T14:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.487085 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.487112 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.487120 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.487132 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.487142 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:53Z","lastTransitionTime":"2026-02-17T14:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.542735 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:21:19.455757075 +0000 UTC Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.567464 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.567482 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.567535 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.567615 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:53 crc kubenswrapper[4836]: E0217 14:07:53.567621 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:53 crc kubenswrapper[4836]: E0217 14:07:53.567690 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:53 crc kubenswrapper[4836]: E0217 14:07:53.567762 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:53 crc kubenswrapper[4836]: E0217 14:07:53.567822 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.589255 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.589289 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.589315 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.589328 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.589337 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:53Z","lastTransitionTime":"2026-02-17T14:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.684903 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.684942 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.684952 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.684967 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.684978 4836 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T14:07:53Z","lastTransitionTime":"2026-02-17T14:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.729414 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns"] Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.730010 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.732757 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.733017 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.733955 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.740551 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.752334 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.752366 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.752386 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.752403 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.752545 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.761882 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.761844054 podStartE2EDuration="1m18.761844054s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:53.760031944 +0000 UTC m=+100.102960263" watchObservedRunningTime="2026-02-17 14:07:53.761844054 +0000 UTC m=+100.104772333" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.848514 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vt5sw" podStartSLOduration=78.848493769 podStartE2EDuration="1m18.848493769s" podCreationTimestamp="2026-02-17 14:06:35 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:53.848222091 +0000 UTC m=+100.191150370" watchObservedRunningTime="2026-02-17 14:07:53.848493769 +0000 UTC m=+100.191422038" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.853660 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.853806 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.853836 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.853861 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.853883 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.854521 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.854554 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.855014 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.861467 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.868803 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.868778581 podStartE2EDuration="7.868778581s" podCreationTimestamp="2026-02-17 14:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:53.868416951 +0000 UTC m=+100.211345230" watchObservedRunningTime="2026-02-17 14:07:53.868778581 +0000 UTC m=+100.211706850" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.872682 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/053cbec1-9d4f-42bc-8df5-28eb6c95a6c0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ff9ns\" (UID: \"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.902055 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.90203656 podStartE2EDuration="1m17.90203656s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:53.884843111 +0000 UTC m=+100.227771400" watchObservedRunningTime="2026-02-17 14:07:53.90203656 +0000 UTC m=+100.244964829" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.917963 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.917944395 podStartE2EDuration="44.917944395s" podCreationTimestamp="2026-02-17 14:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:53.903444617 +0000 UTC m=+100.246372936" watchObservedRunningTime="2026-02-17 14:07:53.917944395 +0000 UTC m=+100.260872664" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.955253 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:53 crc kubenswrapper[4836]: E0217 14:07:53.955510 4836 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:07:53 crc kubenswrapper[4836]: E0217 14:07:53.955615 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs podName:8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c nodeName:}" failed. No retries permitted until 2026-02-17 14:08:57.955594021 +0000 UTC m=+164.298522340 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs") pod "network-metrics-daemon-c4txt" (UID: "8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.957805 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jlz6g" podStartSLOduration=78.95779353 podStartE2EDuration="1m18.95779353s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:53.95708283 +0000 UTC m=+100.300011119" watchObservedRunningTime="2026-02-17 14:07:53.95779353 +0000 UTC m=+100.300721799" Feb 17 14:07:53 crc kubenswrapper[4836]: I0217 14:07:53.998785 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-t7845" podStartSLOduration=78.998768145 podStartE2EDuration="1m18.998768145s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:53.998157548 +0000 UTC m=+100.341085817" watchObservedRunningTime="2026-02-17 14:07:53.998768145 +0000 UTC m=+100.341696414" Feb 17 14:07:54 crc kubenswrapper[4836]: I0217 14:07:54.010659 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7nmc8" podStartSLOduration=78.010636731 podStartE2EDuration="1m18.010636731s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:54.009675736 +0000 UTC m=+100.352604025" watchObservedRunningTime="2026-02-17 
14:07:54.010636731 +0000 UTC m=+100.353565000" Feb 17 14:07:54 crc kubenswrapper[4836]: I0217 14:07:54.055823 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" Feb 17 14:07:54 crc kubenswrapper[4836]: I0217 14:07:54.060228 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.060212827 podStartE2EDuration="1m15.060212827s" podCreationTimestamp="2026-02-17 14:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:54.05959087 +0000 UTC m=+100.402519169" watchObservedRunningTime="2026-02-17 14:07:54.060212827 +0000 UTC m=+100.403141096" Feb 17 14:07:54 crc kubenswrapper[4836]: W0217 14:07:54.084614 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod053cbec1_9d4f_42bc_8df5_28eb6c95a6c0.slice/crio-4e76d16e65fe5e76ea24a33a9de21232e9b6592452d31dbe3c10a1103008ab47 WatchSource:0}: Error finding container 4e76d16e65fe5e76ea24a33a9de21232e9b6592452d31dbe3c10a1103008ab47: Status 404 returned error can't find the container with id 4e76d16e65fe5e76ea24a33a9de21232e9b6592452d31dbe3c10a1103008ab47 Feb 17 14:07:54 crc kubenswrapper[4836]: I0217 14:07:54.085640 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-c76cc" podStartSLOduration=79.085616956 podStartE2EDuration="1m19.085616956s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:54.085134843 +0000 UTC m=+100.428063142" watchObservedRunningTime="2026-02-17 14:07:54.085616956 +0000 UTC m=+100.428545235" Feb 17 14:07:54 crc kubenswrapper[4836]: 
I0217 14:07:54.141124 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" event={"ID":"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0","Type":"ContainerStarted","Data":"4e76d16e65fe5e76ea24a33a9de21232e9b6592452d31dbe3c10a1103008ab47"} Feb 17 14:07:54 crc kubenswrapper[4836]: I0217 14:07:54.543940 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:23:40.301507524 +0000 UTC Feb 17 14:07:54 crc kubenswrapper[4836]: I0217 14:07:54.544022 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 14:07:54 crc kubenswrapper[4836]: I0217 14:07:54.563861 4836 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 14:07:55 crc kubenswrapper[4836]: I0217 14:07:55.145553 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" event={"ID":"053cbec1-9d4f-42bc-8df5-28eb6c95a6c0","Type":"ContainerStarted","Data":"f036d2cecfdd209cf7b301a4b376a4d50f7ea7c9e4e6c1c5524b67376b2b1226"} Feb 17 14:07:55 crc kubenswrapper[4836]: I0217 14:07:55.158756 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ff9ns" podStartSLOduration=80.158717992 podStartE2EDuration="1m20.158717992s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:55.158052395 +0000 UTC m=+101.500980674" watchObservedRunningTime="2026-02-17 14:07:55.158717992 +0000 UTC m=+101.501646251" Feb 17 14:07:55 crc kubenswrapper[4836]: I0217 14:07:55.159080 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podStartSLOduration=80.159074292 podStartE2EDuration="1m20.159074292s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:07:54.097618996 +0000 UTC m=+100.440547275" watchObservedRunningTime="2026-02-17 14:07:55.159074292 +0000 UTC m=+101.502002561" Feb 17 14:07:55 crc kubenswrapper[4836]: I0217 14:07:55.567956 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:55 crc kubenswrapper[4836]: I0217 14:07:55.567958 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:55 crc kubenswrapper[4836]: E0217 14:07:55.568698 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:55 crc kubenswrapper[4836]: I0217 14:07:55.568006 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:55 crc kubenswrapper[4836]: E0217 14:07:55.569080 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:55 crc kubenswrapper[4836]: I0217 14:07:55.567998 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:55 crc kubenswrapper[4836]: E0217 14:07:55.568805 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:55 crc kubenswrapper[4836]: E0217 14:07:55.569622 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:57 crc kubenswrapper[4836]: I0217 14:07:57.566947 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:57 crc kubenswrapper[4836]: I0217 14:07:57.566997 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:57 crc kubenswrapper[4836]: E0217 14:07:57.567093 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:07:57 crc kubenswrapper[4836]: I0217 14:07:57.567109 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:57 crc kubenswrapper[4836]: I0217 14:07:57.567661 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:57 crc kubenswrapper[4836]: E0217 14:07:57.567746 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:57 crc kubenswrapper[4836]: E0217 14:07:57.567747 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:57 crc kubenswrapper[4836]: E0217 14:07:57.567934 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:57 crc kubenswrapper[4836]: I0217 14:07:57.568078 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:07:57 crc kubenswrapper[4836]: E0217 14:07:57.568372 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:07:59 crc kubenswrapper[4836]: I0217 14:07:59.567702 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:07:59 crc kubenswrapper[4836]: I0217 14:07:59.567805 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:07:59 crc kubenswrapper[4836]: I0217 14:07:59.567710 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:07:59 crc kubenswrapper[4836]: E0217 14:07:59.567840 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:07:59 crc kubenswrapper[4836]: I0217 14:07:59.567731 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:07:59 crc kubenswrapper[4836]: E0217 14:07:59.568073 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:07:59 crc kubenswrapper[4836]: E0217 14:07:59.568170 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:07:59 crc kubenswrapper[4836]: E0217 14:07:59.568218 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:01 crc kubenswrapper[4836]: I0217 14:08:01.567224 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:01 crc kubenswrapper[4836]: I0217 14:08:01.567241 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:01 crc kubenswrapper[4836]: I0217 14:08:01.567404 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:01 crc kubenswrapper[4836]: E0217 14:08:01.567409 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:01 crc kubenswrapper[4836]: I0217 14:08:01.567468 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:01 crc kubenswrapper[4836]: E0217 14:08:01.567601 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:01 crc kubenswrapper[4836]: E0217 14:08:01.567779 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:01 crc kubenswrapper[4836]: E0217 14:08:01.567842 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:03 crc kubenswrapper[4836]: I0217 14:08:03.567748 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:03 crc kubenswrapper[4836]: I0217 14:08:03.567796 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:03 crc kubenswrapper[4836]: I0217 14:08:03.567736 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:03 crc kubenswrapper[4836]: I0217 14:08:03.567844 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:03 crc kubenswrapper[4836]: E0217 14:08:03.567903 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:03 crc kubenswrapper[4836]: E0217 14:08:03.568035 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:03 crc kubenswrapper[4836]: E0217 14:08:03.568147 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:03 crc kubenswrapper[4836]: E0217 14:08:03.568329 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:05 crc kubenswrapper[4836]: I0217 14:08:05.568482 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:05 crc kubenswrapper[4836]: E0217 14:08:05.568665 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:05 crc kubenswrapper[4836]: I0217 14:08:05.568958 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:05 crc kubenswrapper[4836]: E0217 14:08:05.569056 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:05 crc kubenswrapper[4836]: I0217 14:08:05.569290 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:05 crc kubenswrapper[4836]: E0217 14:08:05.569428 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:05 crc kubenswrapper[4836]: I0217 14:08:05.569692 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:05 crc kubenswrapper[4836]: E0217 14:08:05.569790 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:07 crc kubenswrapper[4836]: I0217 14:08:07.567409 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:07 crc kubenswrapper[4836]: I0217 14:08:07.567453 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:07 crc kubenswrapper[4836]: I0217 14:08:07.567498 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:07 crc kubenswrapper[4836]: E0217 14:08:07.567551 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:07 crc kubenswrapper[4836]: E0217 14:08:07.567633 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:07 crc kubenswrapper[4836]: E0217 14:08:07.567766 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:07 crc kubenswrapper[4836]: I0217 14:08:07.567912 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:07 crc kubenswrapper[4836]: E0217 14:08:07.567987 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:09 crc kubenswrapper[4836]: I0217 14:08:09.567317 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:09 crc kubenswrapper[4836]: I0217 14:08:09.567319 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:09 crc kubenswrapper[4836]: E0217 14:08:09.567456 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:09 crc kubenswrapper[4836]: I0217 14:08:09.567536 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:09 crc kubenswrapper[4836]: I0217 14:08:09.567536 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:09 crc kubenswrapper[4836]: E0217 14:08:09.567662 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:09 crc kubenswrapper[4836]: E0217 14:08:09.567763 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:09 crc kubenswrapper[4836]: E0217 14:08:09.567825 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.202919 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/1.log" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.203740 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/0.log" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.203794 4836 generic.go:334] "Generic (PLEG): container finished" podID="592aa549-1b1b-441e-93e4-0821e05ff2b2" containerID="b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41" exitCode=1 Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.203853 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerDied","Data":"b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41"} Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.203902 4836 scope.go:117] "RemoveContainer" containerID="d56329a484e41f44539eb8ca01b461cc6f79eebf756350b0f49c47c0431e8abc" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.205132 4836 scope.go:117] "RemoveContainer" containerID="b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41" Feb 17 14:08:11 crc 
kubenswrapper[4836]: E0217 14:08:11.205749 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-c76cc_openshift-multus(592aa549-1b1b-441e-93e4-0821e05ff2b2)\"" pod="openshift-multus/multus-c76cc" podUID="592aa549-1b1b-441e-93e4-0821e05ff2b2" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.567247 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.567402 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.567442 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.567471 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:11 crc kubenswrapper[4836]: E0217 14:08:11.568057 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:11 crc kubenswrapper[4836]: E0217 14:08:11.568241 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:11 crc kubenswrapper[4836]: E0217 14:08:11.568610 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:11 crc kubenswrapper[4836]: E0217 14:08:11.568751 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:11 crc kubenswrapper[4836]: I0217 14:08:11.568915 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:08:11 crc kubenswrapper[4836]: E0217 14:08:11.569079 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfznp_openshift-ovn-kubernetes(67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" Feb 17 14:08:12 crc kubenswrapper[4836]: I0217 14:08:12.208899 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/1.log" Feb 17 14:08:13 crc kubenswrapper[4836]: I0217 
14:08:13.567912 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:13 crc kubenswrapper[4836]: I0217 14:08:13.567966 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:13 crc kubenswrapper[4836]: I0217 14:08:13.568096 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:13 crc kubenswrapper[4836]: E0217 14:08:13.568152 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:13 crc kubenswrapper[4836]: I0217 14:08:13.568183 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:13 crc kubenswrapper[4836]: E0217 14:08:13.568359 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:13 crc kubenswrapper[4836]: E0217 14:08:13.568473 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:13 crc kubenswrapper[4836]: E0217 14:08:13.568562 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:14 crc kubenswrapper[4836]: E0217 14:08:14.495950 4836 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 14:08:14 crc kubenswrapper[4836]: E0217 14:08:14.675999 4836 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:08:15 crc kubenswrapper[4836]: I0217 14:08:15.567751 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:15 crc kubenswrapper[4836]: I0217 14:08:15.567750 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:15 crc kubenswrapper[4836]: I0217 14:08:15.567795 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:15 crc kubenswrapper[4836]: I0217 14:08:15.568528 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:15 crc kubenswrapper[4836]: E0217 14:08:15.568742 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:15 crc kubenswrapper[4836]: E0217 14:08:15.568819 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:15 crc kubenswrapper[4836]: E0217 14:08:15.568964 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:15 crc kubenswrapper[4836]: E0217 14:08:15.569158 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:17 crc kubenswrapper[4836]: I0217 14:08:17.568100 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:17 crc kubenswrapper[4836]: I0217 14:08:17.568128 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:17 crc kubenswrapper[4836]: I0217 14:08:17.568104 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:17 crc kubenswrapper[4836]: E0217 14:08:17.568256 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:17 crc kubenswrapper[4836]: I0217 14:08:17.568358 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:17 crc kubenswrapper[4836]: E0217 14:08:17.568466 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:17 crc kubenswrapper[4836]: E0217 14:08:17.568611 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:17 crc kubenswrapper[4836]: E0217 14:08:17.568712 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:19 crc kubenswrapper[4836]: I0217 14:08:19.567853 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:19 crc kubenswrapper[4836]: I0217 14:08:19.567862 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:19 crc kubenswrapper[4836]: I0217 14:08:19.567888 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:19 crc kubenswrapper[4836]: I0217 14:08:19.567894 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:19 crc kubenswrapper[4836]: E0217 14:08:19.568691 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:19 crc kubenswrapper[4836]: E0217 14:08:19.568774 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:19 crc kubenswrapper[4836]: E0217 14:08:19.568871 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:19 crc kubenswrapper[4836]: E0217 14:08:19.568928 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:19 crc kubenswrapper[4836]: E0217 14:08:19.677194 4836 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:08:21 crc kubenswrapper[4836]: I0217 14:08:21.567355 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:21 crc kubenswrapper[4836]: I0217 14:08:21.567420 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:21 crc kubenswrapper[4836]: I0217 14:08:21.567460 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:21 crc kubenswrapper[4836]: I0217 14:08:21.567443 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:21 crc kubenswrapper[4836]: E0217 14:08:21.567500 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:21 crc kubenswrapper[4836]: E0217 14:08:21.567583 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:21 crc kubenswrapper[4836]: E0217 14:08:21.567649 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:21 crc kubenswrapper[4836]: E0217 14:08:21.567780 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:22 crc kubenswrapper[4836]: I0217 14:08:22.568526 4836 scope.go:117] "RemoveContainer" containerID="b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41" Feb 17 14:08:23 crc kubenswrapper[4836]: I0217 14:08:23.245253 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/1.log" Feb 17 14:08:23 crc kubenswrapper[4836]: I0217 14:08:23.245586 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerStarted","Data":"d7051348fa11415bbd3ca42ccce04342cfc29fef1e5015e7fedf40514e49824c"} Feb 17 14:08:23 crc kubenswrapper[4836]: I0217 14:08:23.567183 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:23 crc kubenswrapper[4836]: I0217 14:08:23.567223 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:23 crc kubenswrapper[4836]: I0217 14:08:23.567191 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:23 crc kubenswrapper[4836]: E0217 14:08:23.567363 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:23 crc kubenswrapper[4836]: I0217 14:08:23.567327 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:23 crc kubenswrapper[4836]: E0217 14:08:23.567437 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:23 crc kubenswrapper[4836]: E0217 14:08:23.567589 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:23 crc kubenswrapper[4836]: E0217 14:08:23.567840 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:24 crc kubenswrapper[4836]: E0217 14:08:24.677731 4836 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:08:25 crc kubenswrapper[4836]: I0217 14:08:25.568024 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:25 crc kubenswrapper[4836]: I0217 14:08:25.568062 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:25 crc kubenswrapper[4836]: I0217 14:08:25.568020 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:25 crc kubenswrapper[4836]: E0217 14:08:25.568225 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:25 crc kubenswrapper[4836]: E0217 14:08:25.568432 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:25 crc kubenswrapper[4836]: E0217 14:08:25.568559 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:25 crc kubenswrapper[4836]: I0217 14:08:25.568671 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:25 crc kubenswrapper[4836]: E0217 14:08:25.568771 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:26 crc kubenswrapper[4836]: I0217 14:08:26.568369 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.261372 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/3.log" Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.263710 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerStarted","Data":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.264190 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.293578 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podStartSLOduration=112.293540766 podStartE2EDuration="1m52.293540766s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:27.290702281 +0000 UTC m=+133.633630590" watchObservedRunningTime="2026-02-17 14:08:27.293540766 +0000 UTC m=+133.636469055" Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.464261 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c4txt"] Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.464457 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:27 crc kubenswrapper[4836]: E0217 14:08:27.464620 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.567238 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.567334 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:27 crc kubenswrapper[4836]: I0217 14:08:27.567238 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:27 crc kubenswrapper[4836]: E0217 14:08:27.567417 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:27 crc kubenswrapper[4836]: E0217 14:08:27.567526 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:27 crc kubenswrapper[4836]: E0217 14:08:27.567587 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:29 crc kubenswrapper[4836]: I0217 14:08:29.567936 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:29 crc kubenswrapper[4836]: I0217 14:08:29.568033 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:29 crc kubenswrapper[4836]: I0217 14:08:29.568045 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:29 crc kubenswrapper[4836]: E0217 14:08:29.568622 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c4txt" podUID="8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c" Feb 17 14:08:29 crc kubenswrapper[4836]: E0217 14:08:29.568445 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 14:08:29 crc kubenswrapper[4836]: I0217 14:08:29.568108 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:29 crc kubenswrapper[4836]: E0217 14:08:29.568796 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 14:08:29 crc kubenswrapper[4836]: E0217 14:08:29.568810 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.567511 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.567564 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.567536 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.567625 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.570676 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.570831 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.570858 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.571311 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.571507 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 14:08:31 crc kubenswrapper[4836]: I0217 14:08:31.572488 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.271098 4836 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.343104 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cnq25"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.344063 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jjmwc"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.344670 4836 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.344780 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.345152 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5l6x4"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.346280 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.346730 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.347192 4836 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.347406 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.347573 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.348710 4836 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.348764 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.348935 4836 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.348971 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.348989 4836 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list 
*v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.349026 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.349040 4836 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.349071 4836 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.349074 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.348943 4836 reflector.go:561] 
object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.349104 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.349120 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.349131 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz"] Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.349179 4836 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.349194 4836 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.349234 4836 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.349248 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.349651 4836 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.349691 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource 
\"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.350342 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.351216 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.351698 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.352613 4836 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.352652 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.352716 4836 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found 
between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.352732 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353092 4836 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.353123 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353101 4836 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.353178 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353177 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User 
\"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353192 4836 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353139 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353271 4836 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353289 4836 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353317 4836 reflector.go:561] 
object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353208 4836 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353322 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353330 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353341 4836 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this 
object Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353365 4836 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.353363 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353288 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353340 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353372 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353377 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353338 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353421 4836 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353451 4836 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353470 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
\"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.353481 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353477 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353544 4836 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353590 4836 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353602 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353612 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.353615 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.353814 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 14:08:34 crc kubenswrapper[4836]: W0217 14:08:34.353865 4836 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 17 14:08:34 crc kubenswrapper[4836]: E0217 14:08:34.353885 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.353943 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.355177 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.355762 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.356958 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.357352 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.357639 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.357794 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.359117 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.359262 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.359261 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.359524 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.359382 4836 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.359397 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.362397 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kcm8s"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.363190 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.365768 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.366059 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.366258 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.366379 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.371003 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.378741 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5cbbv"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.379270 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.380512 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-98frx"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.390109 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.391837 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.392146 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.392406 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.460698 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.461902 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.462591 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.463412 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.463787 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.464984 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6rsds"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.466245 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.466863 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.479998 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7klmp"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.481459 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.482097 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.490541 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.492237 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.492869 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8sd2q"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.493379 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.493802 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.494262 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.497028 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.502472 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.502893 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.503390 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.510977 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.511390 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.511446 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.512052 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.512345 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513128 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513168 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7qrb\" (UniqueName: \"kubernetes.io/projected/66402e53-3287-45c4-bceb-78fc99836c5b-kube-api-access-q7qrb\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513195 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513218 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2840702b-d22f-4184-bada-4cd337d79407-config\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513308 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2840702b-d22f-4184-bada-4cd337d79407-trusted-ca\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513338 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-client\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513560 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513586 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgf7\" (UniqueName: \"kubernetes.io/projected/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-kube-api-access-rfgf7\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513611 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66402e53-3287-45c4-bceb-78fc99836c5b-node-pullsecrets\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: 
I0217 14:08:34.513649 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513674 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513718 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66402e53-3287-45c4-bceb-78fc99836c5b-audit-dir\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513741 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhngg\" (UniqueName: \"kubernetes.io/projected/5ad14aa6-962d-4f8f-babe-745f65d63560-kube-api-access-fhngg\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513766 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca\") pod 
\"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513791 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513844 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/444e52ba-f376-40d9-b32f-aa5b523e4134-auth-proxy-config\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513876 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-client-ca\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513903 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dptq7\" (UniqueName: \"kubernetes.io/projected/444e52ba-f376-40d9-b32f-aa5b523e4134-kube-api-access-dptq7\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513928 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-audit\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513954 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrncr\" (UniqueName: \"kubernetes.io/projected/95872171-94c1-4b8a-935f-ae180a4e3d11-kube-api-access-wrncr\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.513980 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-service-ca-bundle\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514002 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514036 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjcv7\" (UniqueName: \"kubernetes.io/projected/d9eb5c8b-f3c7-4068-82c7-28520f6905c6-kube-api-access-gjcv7\") pod 
\"downloads-7954f5f757-5cbbv\" (UID: \"d9eb5c8b-f3c7-4068-82c7-28520f6905c6\") " pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514077 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95872171-94c1-4b8a-935f-ae180a4e3d11-serving-cert\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514100 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-serving-cert\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514137 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514161 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjskd\" (UniqueName: \"kubernetes.io/projected/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-kube-api-access-xjskd\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514195 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-config\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514214 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2840702b-d22f-4184-bada-4cd337d79407-serving-cert\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514245 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.514266 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444e52ba-f376-40d9-b32f-aa5b523e4134-config\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.515970 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.516398 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 
14:08:34.516819 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517092 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517385 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517592 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-client-ca\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517659 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-serving-cert\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517696 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517709 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wqh\" (UniqueName: \"kubernetes.io/projected/2840702b-d22f-4184-bada-4cd337d79407-kube-api-access-z2wqh\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517737 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/444e52ba-f376-40d9-b32f-aa5b523e4134-machine-approver-tls\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517822 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517819 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-audit-dir\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517903 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517972 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.517997 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-etcd-client\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518056 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6ml\" (UniqueName: \"kubernetes.io/projected/8c77bcf1-4025-4c35-9580-41e9a61195e8-kube-api-access-nj6ml\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518082 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-config\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518103 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-audit-policies\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518154 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad14aa6-962d-4f8f-babe-745f65d63560-serving-cert\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") 
" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518179 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518228 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518251 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-encryption-config\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518890 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.521509 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6zspj"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.518972 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519002 4836 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519073 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519356 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519485 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519516 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519540 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519812 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519832 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519879 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.519940 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.520163 4836 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.520215 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.520323 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.520762 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.521386 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.516043 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.521479 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.521626 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.521707 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.524210 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7"] Feb 17 14:08:34 crc 
kubenswrapper[4836]: I0217 14:08:34.524770 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.524916 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.525427 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.525945 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khbdr"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.532418 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.533189 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.526632 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.526748 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.526828 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.526512 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.532813 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.528634 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.529561 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.529561 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.529615 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.529740 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.529851 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.530000 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.530141 4836 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.538266 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.539089 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.556664 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.556986 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.557344 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.557679 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.560860 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.562066 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.562530 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.580440 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.586555 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.588247 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.591032 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.592321 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.594353 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.597343 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.597750 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.597944 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.598537 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ch9j6"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.599049 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.599107 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-72n7k"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.599157 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.599194 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.600168 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.600602 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.600741 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.602036 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fqzrl"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.602582 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.603059 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.603409 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.604597 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.606545 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.606573 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.606711 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5vhz9"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.607230 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.607928 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.609215 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.609346 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jjmwc"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.610454 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.611010 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.611693 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-98frx"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.612671 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.613682 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.615878 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cnq25"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.616017 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-5cbbv"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.617352 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7klmp"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.618789 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8sd2q"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619068 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619108 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619132 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-encryption-config\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619159 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2840702b-d22f-4184-bada-4cd337d79407-config\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " 
pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619186 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619215 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619238 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7qrb\" (UniqueName: \"kubernetes.io/projected/66402e53-3287-45c4-bceb-78fc99836c5b-kube-api-access-q7qrb\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619261 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619312 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-metrics-tls\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619339 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619364 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2840702b-d22f-4184-bada-4cd337d79407-trusted-ca\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619386 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-policies\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619411 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-config\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619441 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ec8466-f311-4f81-ae38-48635b000ced-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619468 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-client\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619495 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619525 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-trusted-ca\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619550 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgf7\" (UniqueName: \"kubernetes.io/projected/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-kube-api-access-rfgf7\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619571 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66402e53-3287-45c4-bceb-78fc99836c5b-node-pullsecrets\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619596 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619705 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8wv\" (UniqueName: \"kubernetes.io/projected/c26f912f-f640-4b4c-ab61-dd2a163f12ab-kube-api-access-qb8wv\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619743 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619768 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkc7h\" (UniqueName: 
\"kubernetes.io/projected/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-kube-api-access-lkc7h\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619809 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66402e53-3287-45c4-bceb-78fc99836c5b-audit-dir\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619834 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhngg\" (UniqueName: \"kubernetes.io/projected/5ad14aa6-962d-4f8f-babe-745f65d63560-kube-api-access-fhngg\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619858 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3f2789-dd41-4e95-8174-db3a40098b0e-config\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619881 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-dir\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc 
kubenswrapper[4836]: I0217 14:08:34.619903 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619927 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.619958 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/921ecdc3-b5f3-44e4-9300-d25342d944d8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jf2hd\" (UID: \"921ecdc3-b5f3-44e4-9300-d25342d944d8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620013 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/444e52ba-f376-40d9-b32f-aa5b523e4134-auth-proxy-config\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620039 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-ca\") pod \"etcd-operator-b45778765-7klmp\" (UID: 
\"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620079 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcvr7\" (UniqueName: \"kubernetes.io/projected/f8080d32-cbe7-4b02-8791-d9f1f9aca269-kube-api-access-dcvr7\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620105 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-client-ca\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620129 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptq7\" (UniqueName: \"kubernetes.io/projected/444e52ba-f376-40d9-b32f-aa5b523e4134-kube-api-access-dptq7\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620162 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-audit\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620187 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620232 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ec8466-f311-4f81-ae38-48635b000ced-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620269 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620313 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620342 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrncr\" (UniqueName: 
\"kubernetes.io/projected/95872171-94c1-4b8a-935f-ae180a4e3d11-kube-api-access-wrncr\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620544 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65684d1d-5242-464d-8caf-ad4866bf6a86-serving-cert\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620565 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620590 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-service-ca-bundle\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620611 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620634 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjcv7\" (UniqueName: \"kubernetes.io/projected/d9eb5c8b-f3c7-4068-82c7-28520f6905c6-kube-api-access-gjcv7\") pod \"downloads-7954f5f757-5cbbv\" (UID: \"d9eb5c8b-f3c7-4068-82c7-28520f6905c6\") " pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620654 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/171e2af0-2993-4cd3-942f-043bccca2813-metrics-tls\") pod \"dns-operator-744455d44c-8sd2q\" (UID: \"171e2af0-2993-4cd3-942f-043bccca2813\") " pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620688 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95872171-94c1-4b8a-935f-ae180a4e3d11-serving-cert\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620724 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-serving-cert\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620745 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a3f2789-dd41-4e95-8174-db3a40098b0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: 
\"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620784 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c26f912f-f640-4b4c-ab61-dd2a163f12ab-serving-cert\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620807 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8080d32-cbe7-4b02-8791-d9f1f9aca269-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620838 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620873 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjskd\" (UniqueName: \"kubernetes.io/projected/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-kube-api-access-xjskd\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620902 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-client\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620926 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-service-ca\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620951 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620974 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htjx\" (UniqueName: \"kubernetes.io/projected/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-kube-api-access-6htjx\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.620999 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-config\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621023 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2840702b-d22f-4184-bada-4cd337d79407-serving-cert\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621045 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621080 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444e52ba-f376-40d9-b32f-aa5b523e4134-config\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621109 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621142 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z42f\" (UniqueName: 
\"kubernetes.io/projected/171e2af0-2993-4cd3-942f-043bccca2813-kube-api-access-5z42f\") pod \"dns-operator-744455d44c-8sd2q\" (UID: \"171e2af0-2993-4cd3-942f-043bccca2813\") " pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621169 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-client-ca\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621205 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8080d32-cbe7-4b02-8791-d9f1f9aca269-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621229 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-serving-cert\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621250 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2wqh\" (UniqueName: \"kubernetes.io/projected/2840702b-d22f-4184-bada-4cd337d79407-kube-api-access-z2wqh\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 
14:08:34.621263 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2840702b-d22f-4184-bada-4cd337d79407-config\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621273 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/444e52ba-f376-40d9-b32f-aa5b523e4134-machine-approver-tls\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621349 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66402e53-3287-45c4-bceb-78fc99836c5b-node-pullsecrets\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.621402 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5l6x4"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.622705 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-service-ca-bundle\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.622742 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.622950 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66402e53-3287-45c4-bceb-78fc99836c5b-audit-dir\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.623527 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9mxp\" (UniqueName: \"kubernetes.io/projected/921ecdc3-b5f3-44e4-9300-d25342d944d8-kube-api-access-n9mxp\") pod \"cluster-samples-operator-665b6dd947-jf2hd\" (UID: \"921ecdc3-b5f3-44e4-9300-d25342d944d8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.623565 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.623614 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.623670 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.623702 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.623765 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-audit-dir\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648008 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444e52ba-f376-40d9-b32f-aa5b523e4134-config\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.646361 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.623820 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/444e52ba-f376-40d9-b32f-aa5b523e4134-auth-proxy-config\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.647453 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648117 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.647896 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-audit-dir\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648253 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a3f2789-dd41-4e95-8174-db3a40098b0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648334 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95872171-94c1-4b8a-935f-ae180a4e3d11-config\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648382 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-client-ca\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648413 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648509 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648669 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2840702b-d22f-4184-bada-4cd337d79407-trusted-ca\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.650888 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95872171-94c1-4b8a-935f-ae180a4e3d11-serving-cert\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.652180 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2840702b-d22f-4184-bada-4cd337d79407-serving-cert\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.652478 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-client-ca\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.652565 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.652817 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/444e52ba-f376-40d9-b32f-aa5b523e4134-machine-approver-tls\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.653666 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.660748 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-encryption-config\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.648596 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6jrc\" (UniqueName: \"kubernetes.io/projected/18ec8466-f311-4f81-ae38-48635b000ced-kube-api-access-g6jrc\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.664229 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-etcd-client\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.664274 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c26f912f-f640-4b4c-ab61-dd2a163f12ab-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.664326 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6ml\" 
(UniqueName: \"kubernetes.io/projected/8c77bcf1-4025-4c35-9580-41e9a61195e8-kube-api-access-nj6ml\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.664353 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-config\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.664379 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkl9p\" (UniqueName: \"kubernetes.io/projected/65684d1d-5242-464d-8caf-ad4866bf6a86-kube-api-access-jkl9p\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.664407 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-audit-policies\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.664429 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad14aa6-962d-4f8f-babe-745f65d63560-serving-cert\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc 
kubenswrapper[4836]: I0217 14:08:34.665523 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-audit-policies\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.666464 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-config\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.667493 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.667917 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-etcd-client\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.671200 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.672556 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad14aa6-962d-4f8f-babe-745f65d63560-serving-cert\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:34 crc 
kubenswrapper[4836]: I0217 14:08:34.675334 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.676851 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-284hg"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.677596 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-284hg" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.679218 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6zspj"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.681169 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.682218 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.683308 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.684342 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-72n7k"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.685286 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.686915 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6scjm"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.687623 4836 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.688076 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.688397 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kcm8s"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.690377 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6rsds"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.691485 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.692832 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.694255 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.695592 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.696763 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khbdr"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.698671 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.699666 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.701335 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.702149 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-284hg"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.703386 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6scjm"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.704561 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.705926 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.706876 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.707580 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ch9j6"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.708697 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.709767 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.710953 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 
14:08:34.712082 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5vhz9"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.716160 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9kmt4"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.716845 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9kmt4" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.717511 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8ngwr"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.718704 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9kmt4"] Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.718769 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8ngwr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.727255 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.746597 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765054 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-metrics-tls\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765087 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-policies\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765105 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765146 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-config\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765161 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-trusted-ca\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765176 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ec8466-f311-4f81-ae38-48635b000ced-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765822 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-config\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.766377 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-trusted-ca\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.765934 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d30a99f-a727-4eb4-9a32-0508707384bf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.766490 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkc7h\" (UniqueName: \"kubernetes.io/projected/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-kube-api-access-lkc7h\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.766510 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb8wv\" (UniqueName: \"kubernetes.io/projected/c26f912f-f640-4b4c-ab61-dd2a163f12ab-kube-api-access-qb8wv\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.766552 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3f2789-dd41-4e95-8174-db3a40098b0e-config\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.766613 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-policies\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767078 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3f2789-dd41-4e95-8174-db3a40098b0e-config\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767220 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767492 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-dir\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767536 
4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/921ecdc3-b5f3-44e4-9300-d25342d944d8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jf2hd\" (UID: \"921ecdc3-b5f3-44e4-9300-d25342d944d8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767563 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-ca\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767960 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcvr7\" (UniqueName: \"kubernetes.io/projected/f8080d32-cbe7-4b02-8791-d9f1f9aca269-kube-api-access-dcvr7\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767982 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/331c189b-0cb5-4733-a233-894429c709a9-srv-cert\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768002 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768051 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65684d1d-5242-464d-8caf-ad4866bf6a86-serving-cert\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768068 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ec8466-f311-4f81-ae38-48635b000ced-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768086 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768102 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768126 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/331c189b-0cb5-4733-a233-894429c709a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768147 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768171 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/171e2af0-2993-4cd3-942f-043bccca2813-metrics-tls\") pod \"dns-operator-744455d44c-8sd2q\" (UID: \"171e2af0-2993-4cd3-942f-043bccca2813\") " pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768187 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768203 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xj5j\" (UniqueName: \"kubernetes.io/projected/331c189b-0cb5-4733-a233-894429c709a9-kube-api-access-6xj5j\") pod \"olm-operator-6b444d44fb-sknds\" (UID: 
\"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768225 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a3f2789-dd41-4e95-8174-db3a40098b0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768248 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-client\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768265 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c26f912f-f640-4b4c-ab61-dd2a163f12ab-serving-cert\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768281 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8080d32-cbe7-4b02-8791-d9f1f9aca269-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768359 4836 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-service-ca\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768388 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768410 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6htjx\" (UniqueName: \"kubernetes.io/projected/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-kube-api-access-6htjx\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768434 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d30a99f-a727-4eb4-9a32-0508707384bf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768456 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z42f\" (UniqueName: \"kubernetes.io/projected/171e2af0-2993-4cd3-942f-043bccca2813-kube-api-access-5z42f\") pod \"dns-operator-744455d44c-8sd2q\" (UID: \"171e2af0-2993-4cd3-942f-043bccca2813\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768477 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768505 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768631 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-ca\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768522 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8080d32-cbe7-4b02-8791-d9f1f9aca269-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.767592 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-dir\") 
pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768930 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9mxp\" (UniqueName: \"kubernetes.io/projected/921ecdc3-b5f3-44e4-9300-d25342d944d8-kube-api-access-n9mxp\") pod \"cluster-samples-operator-665b6dd947-jf2hd\" (UID: \"921ecdc3-b5f3-44e4-9300-d25342d944d8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768946 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a3f2789-dd41-4e95-8174-db3a40098b0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768962 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768979 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" 
Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.768994 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769012 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6jrc\" (UniqueName: \"kubernetes.io/projected/18ec8466-f311-4f81-ae38-48635b000ced-kube-api-access-g6jrc\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769050 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqh4c\" (UniqueName: \"kubernetes.io/projected/19216a1e-34af-4764-a621-e5097db4751b-kube-api-access-bqh4c\") pod \"multus-admission-controller-857f4d67dd-ch9j6\" (UID: \"19216a1e-34af-4764-a621-e5097db4751b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769088 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crtnq\" (UniqueName: \"kubernetes.io/projected/1d30a99f-a727-4eb4-9a32-0508707384bf-kube-api-access-crtnq\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769105 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c26f912f-f640-4b4c-ab61-dd2a163f12ab-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769150 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769168 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrbt\" (UniqueName: \"kubernetes.io/projected/985bc83c-52fa-45dc-ab4f-6e47ee47683e-kube-api-access-sfrbt\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769184 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkl9p\" (UniqueName: \"kubernetes.io/projected/65684d1d-5242-464d-8caf-ad4866bf6a86-kube-api-access-jkl9p\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769200 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19216a1e-34af-4764-a621-e5097db4751b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ch9j6\" (UID: 
\"19216a1e-34af-4764-a621-e5097db4751b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.769243 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.770322 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.770768 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.771355 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.771556 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/18ec8466-f311-4f81-ae38-48635b000ced-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.771864 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-service-ca\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.772312 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-metrics-tls\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.772609 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c26f912f-f640-4b4c-ab61-dd2a163f12ab-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.773332 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ec8466-f311-4f81-ae38-48635b000ced-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.773409 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.775284 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.775713 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.776220 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/921ecdc3-b5f3-44e4-9300-d25342d944d8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jf2hd\" (UID: \"921ecdc3-b5f3-44e4-9300-d25342d944d8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.778721 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.779631 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65684d1d-5242-464d-8caf-ad4866bf6a86-serving-cert\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.780971 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8080d32-cbe7-4b02-8791-d9f1f9aca269-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.781998 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/171e2af0-2993-4cd3-942f-043bccca2813-metrics-tls\") pod \"dns-operator-744455d44c-8sd2q\" (UID: \"171e2af0-2993-4cd3-942f-043bccca2813\") " pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.782129 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.782281 
4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a3f2789-dd41-4e95-8174-db3a40098b0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.782410 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.782412 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.784832 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c26f912f-f640-4b4c-ab61-dd2a163f12ab-serving-cert\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.786829 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.786920 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/65684d1d-5242-464d-8caf-ad4866bf6a86-etcd-client\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.795365 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.806620 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.826862 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.854176 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.866746 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.869887 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqh4c\" (UniqueName: \"kubernetes.io/projected/19216a1e-34af-4764-a621-e5097db4751b-kube-api-access-bqh4c\") pod \"multus-admission-controller-857f4d67dd-ch9j6\" (UID: \"19216a1e-34af-4764-a621-e5097db4751b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.869943 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrbt\" (UniqueName: 
\"kubernetes.io/projected/985bc83c-52fa-45dc-ab4f-6e47ee47683e-kube-api-access-sfrbt\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.869969 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crtnq\" (UniqueName: \"kubernetes.io/projected/1d30a99f-a727-4eb4-9a32-0508707384bf-kube-api-access-crtnq\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870014 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870052 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19216a1e-34af-4764-a621-e5097db4751b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ch9j6\" (UID: \"19216a1e-34af-4764-a621-e5097db4751b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870148 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d30a99f-a727-4eb4-9a32-0508707384bf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 
14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870235 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/331c189b-0cb5-4733-a233-894429c709a9-srv-cert\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870280 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/331c189b-0cb5-4733-a233-894429c709a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870336 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870358 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xj5j\" (UniqueName: \"kubernetes.io/projected/331c189b-0cb5-4733-a233-894429c709a9-kube-api-access-6xj5j\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.870421 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d30a99f-a727-4eb4-9a32-0508707384bf-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.871592 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8080d32-cbe7-4b02-8791-d9f1f9aca269-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.886650 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.906665 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.927446 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.946698 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.967956 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.973809 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/331c189b-0cb5-4733-a233-894429c709a9-srv-cert\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.987065 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 14:08:34 crc kubenswrapper[4836]: I0217 14:08:34.993420 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/331c189b-0cb5-4733-a233-894429c709a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.007347 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.027385 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.047396 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.066720 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.087635 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.107605 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.127427 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.134429 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.147928 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.173849 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.183009 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.187142 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.207879 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.227049 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.246871 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.266737 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.286640 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.327145 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.347259 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.366743 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.387388 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.406886 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.427260 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.433175 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19216a1e-34af-4764-a621-e5097db4751b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ch9j6\" 
(UID: \"19216a1e-34af-4764-a621-e5097db4751b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.446760 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.467184 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.488272 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.508430 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.527600 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.548516 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.568350 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.587766 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.605589 4836 request.go:700] Waited for 1.004541941s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0 Feb 17 14:08:35 
crc kubenswrapper[4836]: I0217 14:08:35.607356 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.620179 4836 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.620376 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images podName:1ecc7c98-e9a3-4850-a741-7e0bcf670e27 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.120349395 +0000 UTC m=+142.463277664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images") pod "machine-api-operator-5694c8668f-jjmwc" (UID: "1ecc7c98-e9a3-4850-a741-7e0bcf670e27") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622446 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622473 4836 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622475 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622536 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622507 4836 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.122492581 +0000 UTC m=+142.465420850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622573 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-trusted-ca-bundle podName:e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.122566813 +0000 UTC m=+142.465495082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-trusted-ca-bundle") pod "apiserver-7bbb656c7d-z6h7n" (UID: "e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622584 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-audit podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.122578454 +0000 UTC m=+142.465506723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-audit") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622597 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.122591314 +0000 UTC m=+142.465519573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622599 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.622703 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.122670236 +0000 UTC m=+142.465598555 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624010 4836 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624044 4836 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624115 4836 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624134 4836 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624140 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls podName:1ecc7c98-e9a3-4850-a741-7e0bcf670e27 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.124106454 +0000 UTC m=+142.467034763 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-jjmwc" (UID: "1ecc7c98-e9a3-4850-a741-7e0bcf670e27") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624196 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.124180956 +0000 UTC m=+142.467109215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624211 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config podName:8c77bcf1-4025-4c35-9580-41e9a61195e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.124202906 +0000 UTC m=+142.467131165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config") pod "controller-manager-879f6c89f-5l6x4" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.624243 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-serving-cert podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:36.124221097 +0000 UTC m=+142.467149436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-serving-cert") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.626503 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.646391 4836 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.646536 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles podName:8c77bcf1-4025-4c35-9580-41e9a61195e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.146501549 +0000 UTC m=+142.489429858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles") pod "controller-manager-879f6c89f-5l6x4" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.646756 4836 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.646864 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-client podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:36.146835857 +0000 UTC m=+142.489764166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-client") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.647283 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.647453 4836 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.647514 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-serving-cert podName:e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.147500775 +0000 UTC m=+142.490429064 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-serving-cert") pod "apiserver-7bbb656c7d-z6h7n" (UID: "e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.648760 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.648804 4836 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.648821 4836 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.648874 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.148845621 +0000 UTC m=+142.491773930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.648917 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert podName:8c77bcf1-4025-4c35-9580-41e9a61195e8 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.148905133 +0000 UTC m=+142.491833442 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert") pod "controller-manager-879f6c89f-5l6x4" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.648938 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config podName:1ecc7c98-e9a3-4850-a741-7e0bcf670e27 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.148928573 +0000 UTC m=+142.491856882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config") pod "machine-api-operator-5694c8668f-jjmwc" (UID: "1ecc7c98-e9a3-4850-a741-7e0bcf670e27") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.667008 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.693060 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.707564 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.727221 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.747922 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.767167 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.788778 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.807051 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.827551 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.847749 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.869125 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.870783 4836 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.870849 4836 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.870912 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d30a99f-a727-4eb4-9a32-0508707384bf-serving-cert podName:1d30a99f-a727-4eb4-9a32-0508707384bf nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.370876326 +0000 UTC m=+142.713804635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d30a99f-a727-4eb4-9a32-0508707384bf-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-dsrq8" (UID: "1d30a99f-a727-4eb4-9a32-0508707384bf") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: E0217 14:08:35.870952 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d30a99f-a727-4eb4-9a32-0508707384bf-config podName:1d30a99f-a727-4eb4-9a32-0508707384bf nodeName:}" failed. No retries permitted until 2026-02-17 14:08:36.370935778 +0000 UTC m=+142.713864087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d30a99f-a727-4eb4-9a32-0508707384bf-config") pod "openshift-apiserver-operator-796bbdcf4f-dsrq8" (UID: "1d30a99f-a727-4eb4-9a32-0508707384bf") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.887732 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.907138 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.927607 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.947529 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 14:08:35 crc kubenswrapper[4836]: I0217 14:08:35.967136 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 14:08:35 crc 
kubenswrapper[4836]: I0217 14:08:35.987915 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.007138 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.027516 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.047584 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.066866 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.087138 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.107436 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.127480 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.146828 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187068 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: 
\"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187111 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187150 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-audit\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187181 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187215 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-serving-cert\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187245 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187312 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187350 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-serving-cert\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187385 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187448 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187841 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: 
\"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187881 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187910 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187946 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.187968 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-client\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.188009 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.202465 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptq7\" (UniqueName: \"kubernetes.io/projected/444e52ba-f376-40d9-b32f-aa5b523e4134-kube-api-access-dptq7\") pod \"machine-approver-56656f9798-cwlxz\" (UID: \"444e52ba-f376-40d9-b32f-aa5b523e4134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.224253 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrncr\" (UniqueName: \"kubernetes.io/projected/95872171-94c1-4b8a-935f-ae180a4e3d11-kube-api-access-wrncr\") pod \"authentication-operator-69f744f599-98frx\" (UID: \"95872171-94c1-4b8a-935f-ae180a4e3d11\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.241062 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2wqh\" (UniqueName: \"kubernetes.io/projected/2840702b-d22f-4184-bada-4cd337d79407-kube-api-access-z2wqh\") pod \"console-operator-58897d9998-kcm8s\" (UID: \"2840702b-d22f-4184-bada-4cd337d79407\") " pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.261769 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjcv7\" (UniqueName: \"kubernetes.io/projected/d9eb5c8b-f3c7-4068-82c7-28520f6905c6-kube-api-access-gjcv7\") pod \"downloads-7954f5f757-5cbbv\" (UID: \"d9eb5c8b-f3c7-4068-82c7-28520f6905c6\") " pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.289963 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fhngg\" (UniqueName: \"kubernetes.io/projected/5ad14aa6-962d-4f8f-babe-745f65d63560-kube-api-access-fhngg\") pod \"route-controller-manager-6576b87f9c-2mmw4\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.361902 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.371377 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 14:08:36 crc kubenswrapper[4836]: W0217 14:08:36.381234 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444e52ba_f376_40d9_b32f_aa5b523e4134.slice/crio-5870e23e3158b354b5696c829a5d80e0b5b48d934f8bae509eca81077c2a9882 WatchSource:0}: Error finding container 5870e23e3158b354b5696c829a5d80e0b5b48d934f8bae509eca81077c2a9882: Status 404 returned error can't find the container with id 5870e23e3158b354b5696c829a5d80e0b5b48d934f8bae509eca81077c2a9882 Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.387871 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.391777 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d30a99f-a727-4eb4-9a32-0508707384bf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.392922 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d30a99f-a727-4eb4-9a32-0508707384bf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.393484 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d30a99f-a727-4eb4-9a32-0508707384bf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.394899 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d30a99f-a727-4eb4-9a32-0508707384bf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.396214 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.405537 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.407869 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.414073 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.427469 4836 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.451600 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.468443 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.471539 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.487320 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.513066 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.528622 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.549577 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.567230 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.595126 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 14:08:36 crc 
kubenswrapper[4836]: I0217 14:08:36.623094 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.625404 4836 request.go:700] Waited for 1.858710819s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.647750 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb8wv\" (UniqueName: \"kubernetes.io/projected/c26f912f-f640-4b4c-ab61-dd2a163f12ab-kube-api-access-qb8wv\") pod \"openshift-config-operator-7777fb866f-2rnsr\" (UID: \"c26f912f-f640-4b4c-ab61-dd2a163f12ab\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.669046 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkc7h\" (UniqueName: \"kubernetes.io/projected/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-kube-api-access-lkc7h\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.700872 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a3f2789-dd41-4e95-8174-db3a40098b0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wd65t\" (UID: \"9a3f2789-dd41-4e95-8174-db3a40098b0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.707370 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcvr7\" (UniqueName: 
\"kubernetes.io/projected/f8080d32-cbe7-4b02-8791-d9f1f9aca269-kube-api-access-dcvr7\") pod \"kube-storage-version-migrator-operator-b67b599dd-rq6q9\" (UID: \"f8080d32-cbe7-4b02-8791-d9f1f9aca269\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.730417 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z42f\" (UniqueName: \"kubernetes.io/projected/171e2af0-2993-4cd3-942f-043bccca2813-kube-api-access-5z42f\") pod \"dns-operator-744455d44c-8sd2q\" (UID: \"171e2af0-2993-4cd3-942f-043bccca2813\") " pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.748536 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9mxp\" (UniqueName: \"kubernetes.io/projected/921ecdc3-b5f3-44e4-9300-d25342d944d8-kube-api-access-n9mxp\") pod \"cluster-samples-operator-665b6dd947-jf2hd\" (UID: \"921ecdc3-b5f3-44e4-9300-d25342d944d8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.783059 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bb5e0b8-9179-4570-a3d8-acaa80b2c884-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7htl2\" (UID: \"3bb5e0b8-9179-4570-a3d8-acaa80b2c884\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.786342 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6htjx\" (UniqueName: \"kubernetes.io/projected/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-kube-api-access-6htjx\") pod \"oauth-openshift-558db77b4-6rsds\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 
14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.791795 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.802164 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6jrc\" (UniqueName: \"kubernetes.io/projected/18ec8466-f311-4f81-ae38-48635b000ced-kube-api-access-g6jrc\") pod \"openshift-controller-manager-operator-756b6f6bc6-nncbw\" (UID: \"18ec8466-f311-4f81-ae38-48635b000ced\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.816624 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.822349 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.827713 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkl9p\" (UniqueName: \"kubernetes.io/projected/65684d1d-5242-464d-8caf-ad4866bf6a86-kube-api-access-jkl9p\") pod \"etcd-operator-b45778765-7klmp\" (UID: \"65684d1d-5242-464d-8caf-ad4866bf6a86\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.827908 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.836038 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.842645 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.850728 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.855829 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqh4c\" (UniqueName: \"kubernetes.io/projected/19216a1e-34af-4764-a621-e5097db4751b-kube-api-access-bqh4c\") pod \"multus-admission-controller-857f4d67dd-ch9j6\" (UID: \"19216a1e-34af-4764-a621-e5097db4751b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.857391 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5cbbv"] Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.863424 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crtnq\" (UniqueName: \"kubernetes.io/projected/1d30a99f-a727-4eb4-9a32-0508707384bf-kube-api-access-crtnq\") pod \"openshift-apiserver-operator-796bbdcf4f-dsrq8\" (UID: \"1d30a99f-a727-4eb4-9a32-0508707384bf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.863486 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.889340 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrbt\" (UniqueName: \"kubernetes.io/projected/985bc83c-52fa-45dc-ab4f-6e47ee47683e-kube-api-access-sfrbt\") pod \"marketplace-operator-79b997595-khbdr\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:36 crc kubenswrapper[4836]: W0217 14:08:36.897510 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9eb5c8b_f3c7_4068_82c7_28520f6905c6.slice/crio-a85018b0fd8f1071325c064f8c51af4a4dc2d8ec54e655fce3e24d025f5a1f07 WatchSource:0}: Error finding container a85018b0fd8f1071325c064f8c51af4a4dc2d8ec54e655fce3e24d025f5a1f07: Status 404 returned error can't find the container with id a85018b0fd8f1071325c064f8c51af4a4dc2d8ec54e655fce3e24d025f5a1f07 Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.904536 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xj5j\" (UniqueName: \"kubernetes.io/projected/331c189b-0cb5-4733-a233-894429c709a9-kube-api-access-6xj5j\") pod \"olm-operator-6b444d44fb-sknds\" (UID: \"331c189b-0cb5-4733-a233-894429c709a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.907020 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.910607 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.926778 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.942031 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.957087 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.968937 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.984778 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.986950 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.988032 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-serving-cert\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:36 crc kubenswrapper[4836]: I0217 14:08:36.995252 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6ml\" (UniqueName: \"kubernetes.io/projected/8c77bcf1-4025-4c35-9580-41e9a61195e8-kube-api-access-nj6ml\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.014551 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.027574 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.031957 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.039664 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.043838 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-serving-cert\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.049625 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6rsds"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.066864 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.088257 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 14:08:37 crc 
kubenswrapper[4836]: I0217 14:08:37.091647 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.102585 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.107051 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.107956 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-audit\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.109001 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4"] Feb 17 14:08:37 crc kubenswrapper[4836]: W0217 14:08:37.113128 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2d8fb42_9c68_4eb3_a8c9_4e4a98772ae7.slice/crio-f9b98c0ff2091be32d114061b6cc2daa5338c6afb2d30dae1e18fe2afc9b3ea3 WatchSource:0}: Error finding container f9b98c0ff2091be32d114061b6cc2daa5338c6afb2d30dae1e18fe2afc9b3ea3: Status 404 returned error can't find the container with id f9b98c0ff2091be32d114061b6cc2daa5338c6afb2d30dae1e18fe2afc9b3ea3 Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.116696 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-kcm8s"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.129162 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.136089 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-98frx"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.137682 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.144585 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert\") pod \"controller-manager-879f6c89f-5l6x4\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.147125 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.154173 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:37 crc kubenswrapper[4836]: W0217 14:08:37.162812 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad14aa6_962d_4f8f_babe_745f65d63560.slice/crio-5a18cb47469c9084e91c362d0628474be4ea76582e846e1e93705e36c466141f WatchSource:0}: Error finding container 
5a18cb47469c9084e91c362d0628474be4ea76582e846e1e93705e36c466141f: Status 404 returned error can't find the container with id 5a18cb47469c9084e91c362d0628474be4ea76582e846e1e93705e36c466141f Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.167507 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.169646 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9"] Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.187147 4836 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188002 4836 projected.go:194] Error preparing data for projected volume kube-api-access-rfgf7 for pod openshift-machine-api/machine-api-operator-5694c8668f-jjmwc: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188086 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-kube-api-access-rfgf7 podName:1ecc7c98-e9a3-4850-a741-7e0bcf670e27 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:37.688062438 +0000 UTC m=+144.030990707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rfgf7" (UniqueName: "kubernetes.io/projected/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-kube-api-access-rfgf7") pod "machine-api-operator-5694c8668f-jjmwc" (UID: "1ecc7c98-e9a3-4850-a741-7e0bcf670e27") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.187215 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188613 4836 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188694 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls podName:1ecc7c98-e9a3-4850-a741-7e0bcf670e27 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.188674135 +0000 UTC m=+144.531602464 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-jjmwc" (UID: "1ecc7c98-e9a3-4850-a741-7e0bcf670e27") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188757 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188782 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:38.188775947 +0000 UTC m=+144.531704216 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188781 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188804 4836 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188824 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config podName:1ecc7c98-e9a3-4850-a741-7e0bcf670e27 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.188818958 +0000 UTC m=+144.531747227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config") pod "machine-api-operator-5694c8668f-jjmwc" (UID: "1ecc7c98-e9a3-4850-a741-7e0bcf670e27") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188848 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188900 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:38.188835909 +0000 UTC m=+144.531764218 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188925 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.188914741 +0000 UTC m=+144.531843110 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.188968 4836 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.189006 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.188995293 +0000 UTC m=+144.531923622 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync secret cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.189038 4836 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.189059 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images podName:1ecc7c98-e9a3-4850-a741-7e0bcf670e27 nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.189053515 +0000 UTC m=+144.531981784 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images") pod "machine-api-operator-5694c8668f-jjmwc" (UID: "1ecc7c98-e9a3-4850-a741-7e0bcf670e27") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.189075 4836 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.189106 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config podName:66402e53-3287-45c4-bceb-78fc99836c5b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.189099486 +0000 UTC m=+144.532027855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config") pod "apiserver-76f77b778f-cnq25" (UID: "66402e53-3287-45c4-bceb-78fc99836c5b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.192214 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjskd\" (UniqueName: \"kubernetes.io/projected/e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40-kube-api-access-xjskd\") pod \"apiserver-7bbb656c7d-z6h7n\" (UID: \"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.201074 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-client\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.210310 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.222203 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.230213 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.234330 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.254983 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.269815 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.295061 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.306950 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.309603 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" event={"ID":"9a3f2789-dd41-4e95-8174-db3a40098b0e","Type":"ContainerStarted","Data":"d799c4391a6169e5f0f62b3e1b0b8dfd89a83d47ce55959b8154cdef4b995635"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.314902 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" event={"ID":"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7","Type":"ContainerStarted","Data":"f9b98c0ff2091be32d114061b6cc2daa5338c6afb2d30dae1e18fe2afc9b3ea3"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.319734 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" 
event={"ID":"f8080d32-cbe7-4b02-8791-d9f1f9aca269","Type":"ContainerStarted","Data":"1705268b5549ed86694d48833fc8eafd8038dbc2777f480409afc935da199ae6"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.322180 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" event={"ID":"2840702b-d22f-4184-bada-4cd337d79407","Type":"ContainerStarted","Data":"134a9b18572422f88105c67f86dd3ba6359bba7958ffe80d7adf51cd875bec9c"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.324887 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" event={"ID":"444e52ba-f376-40d9-b32f-aa5b523e4134","Type":"ContainerStarted","Data":"ff1ce2a2ed81c25aa98d7d7035011726b226ae8015dadecc845c0d72af7c204d"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.324926 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" event={"ID":"444e52ba-f376-40d9-b32f-aa5b523e4134","Type":"ContainerStarted","Data":"a9f603d37f5106008f4b5191c2012e8d9b2c86e1c1a683e4a6edfbeff60eac0d"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.324939 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" event={"ID":"444e52ba-f376-40d9-b32f-aa5b523e4134","Type":"ContainerStarted","Data":"5870e23e3158b354b5696c829a5d80e0b5b48d934f8bae509eca81077c2a9882"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.327766 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" event={"ID":"95872171-94c1-4b8a-935f-ae180a4e3d11","Type":"ContainerStarted","Data":"18b274722755de1985dbce64b08d691afd2c8f098180eef0d46ca7be76106cef"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.328326 4836 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.331843 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cbbv" event={"ID":"d9eb5c8b-f3c7-4068-82c7-28520f6905c6","Type":"ContainerStarted","Data":"92b59bab9fd909d359405ecf217a49ab1de8122281a49768577c5a706060d118"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.331894 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cbbv" event={"ID":"d9eb5c8b-f3c7-4068-82c7-28520f6905c6","Type":"ContainerStarted","Data":"a85018b0fd8f1071325c064f8c51af4a4dc2d8ec54e655fce3e24d025f5a1f07"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.332422 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.333737 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.333787 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.334263 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7qrb\" (UniqueName: \"kubernetes.io/projected/66402e53-3287-45c4-bceb-78fc99836c5b-kube-api-access-q7qrb\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 
14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.337099 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" event={"ID":"5ad14aa6-962d-4f8f-babe-745f65d63560","Type":"ContainerStarted","Data":"5a18cb47469c9084e91c362d0628474be4ea76582e846e1e93705e36c466141f"} Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.346668 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.370808 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.393845 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.410170 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.430947 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512107 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-trusted-ca\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512155 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/296ae94a-36e6-480b-9395-8f6a96621fdf-service-ca-bundle\") pod \"router-default-5444994796-fqzrl\" 
(UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512186 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhp9d\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-kube-api-access-vhp9d\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512223 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-serving-cert\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512256 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-config\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512280 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1679c4a6-a707-4150-825b-5cb8b90cb27c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512321 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj8f\" (UniqueName: \"kubernetes.io/projected/628fd7f0-d4b6-4866-b7d4-6966ed698611-kube-api-access-njj8f\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512374 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvpk\" (UniqueName: \"kubernetes.io/projected/6d52104b-91e7-4a3a-9138-163eb850485d-kube-api-access-grvpk\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512408 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-serving-cert\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512459 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a506e2e-c940-4f10-b89c-948d10ba8902-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512513 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf64r\" (UniqueName: \"kubernetes.io/projected/83427963-071f-40a0-8988-b39a3d41e59f-kube-api-access-jf64r\") pod 
\"migrator-59844c95c7-gtjx7\" (UID: \"83427963-071f-40a0-8988-b39a3d41e59f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512549 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ad67d365-7ef5-406c-9ffe-6f66253704c9-profile-collector-cert\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512576 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cea58b47-da5e-4dc7-be23-19d8408318d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jhzxl\" (UID: \"cea58b47-da5e-4dc7-be23-19d8408318d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512608 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512672 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-bound-sa-token\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512712 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc9lz\" (UniqueName: \"kubernetes.io/projected/1ce6cce5-c0bb-4d10-8458-bb9e15832a9c-kube-api-access-vc9lz\") pod \"package-server-manager-789f6589d5-blr59\" (UID: \"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512747 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/628fd7f0-d4b6-4866-b7d4-6966ed698611-proxy-tls\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512799 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxf8g\" (UniqueName: \"kubernetes.io/projected/1a506e2e-c940-4f10-b89c-948d10ba8902-kube-api-access-qxf8g\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512824 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-tls\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512850 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cd3f585-c95f-43ee-962c-ea33aff90415-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512874 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1679c4a6-a707-4150-825b-5cb8b90cb27c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.512989 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rsrv\" (UniqueName: \"kubernetes.io/projected/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-kube-api-access-7rsrv\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513015 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-config\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513038 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ad67d365-7ef5-406c-9ffe-6f66253704c9-srv-cert\") 
pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513060 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c758606a-b3e4-494e-a2a6-7a7320277b37-tmpfs\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513082 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb437c-beea-4f2d-b3f7-505b87fe6dee-config-volume\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513103 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-oauth-serving-cert\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513129 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjkns\" (UniqueName: \"kubernetes.io/projected/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-kube-api-access-fjkns\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513152 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/628fd7f0-d4b6-4866-b7d4-6966ed698611-images\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513174 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c758606a-b3e4-494e-a2a6-7a7320277b37-apiservice-cert\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513200 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cd3f585-c95f-43ee-962c-ea33aff90415-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513254 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1679c4a6-a707-4150-825b-5cb8b90cb27c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513277 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a506e2e-c940-4f10-b89c-948d10ba8902-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513376 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66m8\" (UniqueName: \"kubernetes.io/projected/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-kube-api-access-r66m8\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513448 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-signing-key\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513486 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-oauth-config\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513535 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-default-certificate\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513578 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6grq\" (UniqueName: \"kubernetes.io/projected/cea58b47-da5e-4dc7-be23-19d8408318d7-kube-api-access-x6grq\") pod \"control-plane-machine-set-operator-78cbb6b69f-jhzxl\" (UID: \"cea58b47-da5e-4dc7-be23-19d8408318d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513643 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/628fd7f0-d4b6-4866-b7d4-6966ed698611-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513665 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84mc8\" (UniqueName: \"kubernetes.io/projected/c758606a-b3e4-494e-a2a6-7a7320277b37-kube-api-access-84mc8\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513714 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-certificates\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513780 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-proxy-tls\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513805 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kdnd\" (UniqueName: \"kubernetes.io/projected/91eb437c-beea-4f2d-b3f7-505b87fe6dee-kube-api-access-5kdnd\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513826 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513904 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd2r2\" (UniqueName: \"kubernetes.io/projected/ad67d365-7ef5-406c-9ffe-6f66253704c9-kube-api-access-gd2r2\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.513964 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: 
\"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514084 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-console-config\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514191 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-trusted-ca-bundle\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514216 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c758606a-b3e4-494e-a2a6-7a7320277b37-webhook-cert\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514323 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-signing-cabundle\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514347 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-stats-auth\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514425 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9t82\" (UniqueName: \"kubernetes.io/projected/296ae94a-36e6-480b-9395-8f6a96621fdf-kube-api-access-p9t82\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514484 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-service-ca\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514537 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514645 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91eb437c-beea-4f2d-b3f7-505b87fe6dee-secret-volume\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 
14:08:37.514672 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a506e2e-c940-4f10-b89c-948d10ba8902-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514755 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-metrics-certs\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.514813 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce6cce5-c0bb-4d10-8458-bb9e15832a9c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-blr59\" (UID: \"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.515354 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.015342148 +0000 UTC m=+144.358270417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.591129 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.592076 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.615894 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.616425 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kdnd\" (UniqueName: \"kubernetes.io/projected/91eb437c-beea-4f2d-b3f7-505b87fe6dee-kube-api-access-5kdnd\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.617448 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:38.117425048 +0000 UTC m=+144.460353317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618277 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618349 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd2r2\" (UniqueName: \"kubernetes.io/projected/ad67d365-7ef5-406c-9ffe-6f66253704c9-kube-api-access-gd2r2\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618379 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618426 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-plugins-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618479 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-console-config\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618543 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8efc7eee-3b20-4cdf-9062-d64472b2c888-certs\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618577 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-trusted-ca-bundle\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618601 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c758606a-b3e4-494e-a2a6-7a7320277b37-webhook-cert\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618625 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f70daa4b-d685-406a-ba3a-7fa6d672acdd-cert\") pod \"ingress-canary-9kmt4\" (UID: \"f70daa4b-d685-406a-ba3a-7fa6d672acdd\") " pod="openshift-ingress-canary/ingress-canary-9kmt4" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618757 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-stats-auth\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618799 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-signing-cabundle\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618850 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9t82\" (UniqueName: \"kubernetes.io/projected/296ae94a-36e6-480b-9395-8f6a96621fdf-kube-api-access-p9t82\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.618888 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-service-ca\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.619088 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.619175 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91eb437c-beea-4f2d-b3f7-505b87fe6dee-secret-volume\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.619228 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a506e2e-c940-4f10-b89c-948d10ba8902-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.619267 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-metrics-certs\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.619308 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-socket-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm" Feb 
17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.619339 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce6cce5-c0bb-4d10-8458-bb9e15832a9c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-blr59\" (UID: \"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620630 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-trusted-ca\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620665 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/296ae94a-36e6-480b-9395-8f6a96621fdf-service-ca-bundle\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620687 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-csi-data-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620715 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhp9d\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-kube-api-access-vhp9d\") pod 
\"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620783 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-serving-cert\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620817 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-config\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620947 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1679c4a6-a707-4150-825b-5cb8b90cb27c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.620975 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2zw\" (UniqueName: \"kubernetes.io/projected/f70daa4b-d685-406a-ba3a-7fa6d672acdd-kube-api-access-lx2zw\") pod \"ingress-canary-9kmt4\" (UID: \"f70daa4b-d685-406a-ba3a-7fa6d672acdd\") " pod="openshift-ingress-canary/ingress-canary-9kmt4" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.621055 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-grvpk\" (UniqueName: \"kubernetes.io/projected/6d52104b-91e7-4a3a-9138-163eb850485d-kube-api-access-grvpk\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.621189 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj8f\" (UniqueName: \"kubernetes.io/projected/628fd7f0-d4b6-4866-b7d4-6966ed698611-kube-api-access-njj8f\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.621379 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-serving-cert\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.621510 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a506e2e-c940-4f10-b89c-948d10ba8902-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.621567 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4km46\" (UniqueName: \"kubernetes.io/projected/8efc7eee-3b20-4cdf-9062-d64472b2c888-kube-api-access-4km46\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " 
pod="openshift-machine-config-operator/machine-config-server-8ngwr" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.623100 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-signing-cabundle\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.624323 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/296ae94a-36e6-480b-9395-8f6a96621fdf-service-ca-bundle\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.624416 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.624833 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf64r\" (UniqueName: \"kubernetes.io/projected/83427963-071f-40a0-8988-b39a3d41e59f-kube-api-access-jf64r\") pod \"migrator-59844c95c7-gtjx7\" (UID: \"83427963-071f-40a0-8988-b39a3d41e59f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.624899 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/ad67d365-7ef5-406c-9ffe-6f66253704c9-profile-collector-cert\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625086 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cea58b47-da5e-4dc7-be23-19d8408318d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jhzxl\" (UID: \"cea58b47-da5e-4dc7-be23-19d8408318d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625142 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625255 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-bound-sa-token\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625347 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc9lz\" (UniqueName: \"kubernetes.io/projected/1ce6cce5-c0bb-4d10-8458-bb9e15832a9c-kube-api-access-vc9lz\") pod \"package-server-manager-789f6589d5-blr59\" (UID: \"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625375 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a506e2e-c940-4f10-b89c-948d10ba8902-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625448 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/628fd7f0-d4b6-4866-b7d4-6966ed698611-proxy-tls\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.625497 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.125481512 +0000 UTC m=+144.468409831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625543 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8efc7eee-3b20-4cdf-9062-d64472b2c888-node-bootstrap-token\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625592 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxf8g\" (UniqueName: \"kubernetes.io/projected/1a506e2e-c940-4f10-b89c-948d10ba8902-kube-api-access-qxf8g\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625629 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1679c4a6-a707-4150-825b-5cb8b90cb27c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.625734 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-tls\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.626396 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-trusted-ca\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.626584 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cd3f585-c95f-43ee-962c-ea33aff90415-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.626951 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-console-config\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.627223 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-config\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.627418 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7rsrv\" (UniqueName: \"kubernetes.io/projected/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-kube-api-access-7rsrv\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.627865 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvptj\" (UniqueName: \"kubernetes.io/projected/f799cad1-5a28-4af5-8070-5c365cddbf78-kube-api-access-jvptj\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.627990 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-mountpoint-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629142 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ad67d365-7ef5-406c-9ffe-6f66253704c9-srv-cert\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629167 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c758606a-b3e4-494e-a2a6-7a7320277b37-tmpfs\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629183 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-config\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629214 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb437c-beea-4f2d-b3f7-505b87fe6dee-config-volume\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629231 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-oauth-serving-cert\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629369 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjkns\" (UniqueName: \"kubernetes.io/projected/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-kube-api-access-fjkns\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629416 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-registration-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629441 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/628fd7f0-d4b6-4866-b7d4-6966ed698611-images\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629463 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c758606a-b3e4-494e-a2a6-7a7320277b37-apiservice-cert\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629517 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cd3f585-c95f-43ee-962c-ea33aff90415-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629534 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f799cad1-5a28-4af5-8070-5c365cddbf78-config-volume\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629564 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a506e2e-c940-4f10-b89c-948d10ba8902-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629589 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1679c4a6-a707-4150-825b-5cb8b90cb27c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629801 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r66m8\" (UniqueName: \"kubernetes.io/projected/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-kube-api-access-r66m8\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629821 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87r8v\" (UniqueName: \"kubernetes.io/projected/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-kube-api-access-87r8v\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629843 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f799cad1-5a28-4af5-8070-5c365cddbf78-metrics-tls\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629934 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-signing-key\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629961 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-oauth-config\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.629980 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-default-certificate\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.630011 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6grq\" (UniqueName: \"kubernetes.io/projected/cea58b47-da5e-4dc7-be23-19d8408318d7-kube-api-access-x6grq\") pod \"control-plane-machine-set-operator-78cbb6b69f-jhzxl\" (UID: \"cea58b47-da5e-4dc7-be23-19d8408318d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.630040 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/628fd7f0-d4b6-4866-b7d4-6966ed698611-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.630652 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-service-ca\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.631251 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-trusted-ca-bundle\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.631915 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1679c4a6-a707-4150-825b-5cb8b90cb27c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.632202 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c758606a-b3e4-494e-a2a6-7a7320277b37-webhook-cert\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.628366 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-serving-cert\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.632637 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.633222 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-config\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.633406 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-stats-auth\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.633844 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-oauth-serving-cert\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.634632 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c758606a-b3e4-494e-a2a6-7a7320277b37-tmpfs\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.634636 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb437c-beea-4f2d-b3f7-505b87fe6dee-config-volume\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.636270 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cd3f585-c95f-43ee-962c-ea33aff90415-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.636824 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1679c4a6-a707-4150-825b-5cb8b90cb27c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.637135 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/628fd7f0-d4b6-4866-b7d4-6966ed698611-images\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.639628 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-serving-cert\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.640306 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ad67d365-7ef5-406c-9ffe-6f66253704c9-srv-cert\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.640541 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/628fd7f0-d4b6-4866-b7d4-6966ed698611-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.640661 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-default-certificate\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.640681 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84mc8\" (UniqueName: \"kubernetes.io/projected/c758606a-b3e4-494e-a2a6-7a7320277b37-kube-api-access-84mc8\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.641709 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-signing-key\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.641719 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-certificates\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.641981 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-proxy-tls\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.649958 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-oauth-config\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.650212 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a506e2e-c940-4f10-b89c-948d10ba8902-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.650655 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cd3f585-c95f-43ee-962c-ea33aff90415-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.652491 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-tls\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.652691 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-certificates\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.653662 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wsdcq\" (UID: \"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.654142 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ad67d365-7ef5-406c-9ffe-6f66253704c9-profile-collector-cert\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.654200 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c758606a-b3e4-494e-a2a6-7a7320277b37-apiservice-cert\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.654886 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91eb437c-beea-4f2d-b3f7-505b87fe6dee-secret-volume\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.655044 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cea58b47-da5e-4dc7-be23-19d8408318d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jhzxl\" (UID: \"cea58b47-da5e-4dc7-be23-19d8408318d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.657126 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-proxy-tls\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.667481 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/296ae94a-36e6-480b-9395-8f6a96621fdf-metrics-certs\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.668410 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce6cce5-c0bb-4d10-8458-bb9e15832a9c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-blr59\" (UID: \"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.673385 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kdnd\" (UniqueName: \"kubernetes.io/projected/91eb437c-beea-4f2d-b3f7-505b87fe6dee-kube-api-access-5kdnd\") pod \"collect-profiles-29522280-c8fps\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.681970 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/628fd7f0-d4b6-4866-b7d4-6966ed698611-proxy-tls\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.689468 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd2r2\" (UniqueName: \"kubernetes.io/projected/ad67d365-7ef5-406c-9ffe-6f66253704c9-kube-api-access-gd2r2\") pod \"catalog-operator-68c6474976-wgzvh\" (UID: \"ad67d365-7ef5-406c-9ffe-6f66253704c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.757220 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds"]
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.757362 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.757560 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.257531728 +0000 UTC m=+144.600459997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.757657 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-plugins-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.757767 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8efc7eee-3b20-4cdf-9062-d64472b2c888-certs\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.757883 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f70daa4b-d685-406a-ba3a-7fa6d672acdd-cert\") pod \"ingress-canary-9kmt4\" (UID: \"f70daa4b-d685-406a-ba3a-7fa6d672acdd\") " pod="openshift-ingress-canary/ingress-canary-9kmt4"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.757940 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-socket-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.757976 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-csi-data-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758022 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2zw\" (UniqueName: \"kubernetes.io/projected/f70daa4b-d685-406a-ba3a-7fa6d672acdd-kube-api-access-lx2zw\") pod \"ingress-canary-9kmt4\" (UID: \"f70daa4b-d685-406a-ba3a-7fa6d672acdd\") " pod="openshift-ingress-canary/ingress-canary-9kmt4"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758065 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4km46\" (UniqueName: \"kubernetes.io/projected/8efc7eee-3b20-4cdf-9062-d64472b2c888-kube-api-access-4km46\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758121 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758153 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8efc7eee-3b20-4cdf-9062-d64472b2c888-node-bootstrap-token\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758242 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvptj\" (UniqueName: \"kubernetes.io/projected/f799cad1-5a28-4af5-8070-5c365cddbf78-kube-api-access-jvptj\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758275 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-mountpoint-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758314 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-registration-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758335 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f799cad1-5a28-4af5-8070-5c365cddbf78-config-volume\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758427 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87r8v\" (UniqueName: \"kubernetes.io/projected/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-kube-api-access-87r8v\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758450 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f799cad1-5a28-4af5-8070-5c365cddbf78-metrics-tls\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758498 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgf7\" (UniqueName: \"kubernetes.io/projected/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-kube-api-access-rfgf7\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.758997 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-plugins-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.759007 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd"]
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.759140 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a506e2e-c940-4f10-b89c-948d10ba8902-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.759217 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-mountpoint-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.759953 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.259932982 +0000 UTC m=+144.602861251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.760542 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f799cad1-5a28-4af5-8070-5c365cddbf78-config-volume\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.760636 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-registration-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.760840 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-socket-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.764655 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-csi-data-dir\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.764756 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgf7\" (UniqueName: \"kubernetes.io/projected/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-kube-api-access-rfgf7\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.765707 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8efc7eee-3b20-4cdf-9062-d64472b2c888-certs\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.767314 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8efc7eee-3b20-4cdf-9062-d64472b2c888-node-bootstrap-token\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.790196 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f799cad1-5a28-4af5-8070-5c365cddbf78-metrics-tls\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.790739 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f70daa4b-d685-406a-ba3a-7fa6d672acdd-cert\") pod \"ingress-canary-9kmt4\" (UID: \"f70daa4b-d685-406a-ba3a-7fa6d672acdd\") " pod="openshift-ingress-canary/ingress-canary-9kmt4"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.795182 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khbdr"]
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.797072 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9t82\" (UniqueName: \"kubernetes.io/projected/296ae94a-36e6-480b-9395-8f6a96621fdf-kube-api-access-p9t82\") pod \"router-default-5444994796-fqzrl\" (UID: \"296ae94a-36e6-480b-9395-8f6a96621fdf\") " pod="openshift-ingress/router-default-5444994796-fqzrl"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.798398 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhp9d\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-kube-api-access-vhp9d\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.799995 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvpk\" (UniqueName: \"kubernetes.io/projected/6d52104b-91e7-4a3a-9138-163eb850485d-kube-api-access-grvpk\") pod \"console-f9d7485db-6zspj\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.811795 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7klmp"]
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.813532 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-bound-sa-token\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9"
Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.822780 4836 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.823438 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5l6x4"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.829892 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.839385 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc9lz\" (UniqueName: \"kubernetes.io/projected/1ce6cce5-c0bb-4d10-8458-bb9e15832a9c-kube-api-access-vc9lz\") pod \"package-server-manager-789f6589d5-blr59\" (UID: \"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.852050 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.853478 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njj8f\" (UniqueName: \"kubernetes.io/projected/628fd7f0-d4b6-4866-b7d4-6966ed698611-kube-api-access-njj8f\") pod \"machine-config-operator-74547568cd-qc9j6\" (UID: \"628fd7f0-d4b6-4866-b7d4-6966ed698611\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.861686 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.862275 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.362251788 +0000 UTC m=+144.705180057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.864137 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.864515 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf64r\" (UniqueName: \"kubernetes.io/projected/83427963-071f-40a0-8988-b39a3d41e59f-kube-api-access-jf64r\") pod \"migrator-59844c95c7-gtjx7\" (UID: \"83427963-071f-40a0-8988-b39a3d41e59f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.870128 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8sd2q"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.870813 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.889406 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ch9j6"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.892558 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.897440 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1679c4a6-a707-4150-825b-5cb8b90cb27c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2xtf8\" (UID: \"1679c4a6-a707-4150-825b-5cb8b90cb27c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.898193 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw"] Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.900280 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.907094 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxf8g\" (UniqueName: \"kubernetes.io/projected/1a506e2e-c940-4f10-b89c-948d10ba8902-kube-api-access-qxf8g\") pod \"cluster-image-registry-operator-dc59b4c8b-4bfcw\" (UID: \"1a506e2e-c940-4f10-b89c-948d10ba8902\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.916926 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.923885 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rsrv\" (UniqueName: \"kubernetes.io/projected/56a1d7ef-ccae-4b8e-b94f-edee0ce6e902-kube-api-access-7rsrv\") pod \"machine-config-controller-84d6567774-l5kfm\" (UID: \"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.926084 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" Feb 17 14:08:37 crc kubenswrapper[4836]: W0217 14:08:37.939203 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19216a1e_34af_4764_a621_e5097db4751b.slice/crio-c732fed9106b6ad88780358d79674f3866d5bc1bee0d51410d5390775fcf4cbc WatchSource:0}: Error finding container c732fed9106b6ad88780358d79674f3866d5bc1bee0d51410d5390775fcf4cbc: Status 404 returned error can't find the container with id c732fed9106b6ad88780358d79674f3866d5bc1bee0d51410d5390775fcf4cbc Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.959031 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjkns\" (UniqueName: \"kubernetes.io/projected/b8fcc519-bc1d-4a7e-8005-cb29f435f4e5-kube-api-access-fjkns\") pod \"service-ca-operator-777779d784-ld8ls\" (UID: \"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.963127 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:37 crc kubenswrapper[4836]: E0217 14:08:37.963588 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.463568428 +0000 UTC m=+144.806496757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:37 crc kubenswrapper[4836]: I0217 14:08:37.975105 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r66m8\" (UniqueName: \"kubernetes.io/projected/0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00-kube-api-access-r66m8\") pod \"service-ca-9c57cc56f-72n7k\" (UID: \"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00\") " pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.011852 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84mc8\" (UniqueName: \"kubernetes.io/projected/c758606a-b3e4-494e-a2a6-7a7320277b37-kube-api-access-84mc8\") pod \"packageserver-d55dfcdfc-x2x76\" (UID: \"c758606a-b3e4-494e-a2a6-7a7320277b37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.019962 4836 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x6grq\" (UniqueName: \"kubernetes.io/projected/cea58b47-da5e-4dc7-be23-19d8408318d7-kube-api-access-x6grq\") pod \"control-plane-machine-set-operator-78cbb6b69f-jhzxl\" (UID: \"cea58b47-da5e-4dc7-be23-19d8408318d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.057966 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4km46\" (UniqueName: \"kubernetes.io/projected/8efc7eee-3b20-4cdf-9062-d64472b2c888-kube-api-access-4km46\") pod \"machine-config-server-8ngwr\" (UID: \"8efc7eee-3b20-4cdf-9062-d64472b2c888\") " pod="openshift-machine-config-operator/machine-config-server-8ngwr" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.064806 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.065241 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.565221657 +0000 UTC m=+144.908149926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.066515 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.070922 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvptj\" (UniqueName: \"kubernetes.io/projected/f799cad1-5a28-4af5-8070-5c365cddbf78-kube-api-access-jvptj\") pod \"dns-default-284hg\" (UID: \"f799cad1-5a28-4af5-8070-5c365cddbf78\") " pod="openshift-dns/dns-default-284hg" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.094501 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87r8v\" (UniqueName: \"kubernetes.io/projected/c2f4f6fb-d604-402f-83d0-6b25781c3aa8-kube-api-access-87r8v\") pod \"csi-hostpathplugin-6scjm\" (UID: \"c2f4f6fb-d604-402f-83d0-6b25781c3aa8\") " pod="hostpath-provisioner/csi-hostpathplugin-6scjm" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.105128 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2zw\" (UniqueName: \"kubernetes.io/projected/f70daa4b-d685-406a-ba3a-7fa6d672acdd-kube-api-access-lx2zw\") pod \"ingress-canary-9kmt4\" (UID: \"f70daa4b-d685-406a-ba3a-7fa6d672acdd\") " pod="openshift-ingress-canary/ingress-canary-9kmt4" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.119639 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.133235 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.134420 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.167557 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.168205 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.66817272 +0000 UTC m=+145.011101149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.177498 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.211777 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.250209 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.258032 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.258914 4836 csr.go:261] certificate signing request csr-c4bdx is approved, waiting to be issued Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.271774 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-284hg" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.272251 4836 csr.go:257] certificate signing request csr-c4bdx is issued Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.272684 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273005 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273068 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273102 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273131 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273164 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273211 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273238 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.273264 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.274001 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.282108 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-config\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.283126 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.283470 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.283554 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.783532414 +0000 UTC m=+145.126460683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.283939 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66402e53-3287-45c4-bceb-78fc99836c5b-image-import-ca\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.284252 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-images\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.293775 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66402e53-3287-45c4-bceb-78fc99836c5b-encryption-config\") pod \"apiserver-76f77b778f-cnq25\" (UID: \"66402e53-3287-45c4-bceb-78fc99836c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.294325 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ecc7c98-e9a3-4850-a741-7e0bcf670e27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jjmwc\" (UID: \"1ecc7c98-e9a3-4850-a741-7e0bcf670e27\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.297151 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.317354 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9kmt4" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.317372 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8ngwr" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.382251 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.382687 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.882671675 +0000 UTC m=+145.225599944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.458152 4836 generic.go:334] "Generic (PLEG): container finished" podID="c26f912f-f640-4b4c-ab61-dd2a163f12ab" containerID="ff70bd1f19cb66489eb8021670929dc34b2836747cd3559cb1b40b76e9b0db37" exitCode=0 Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.458235 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" event={"ID":"c26f912f-f640-4b4c-ab61-dd2a163f12ab","Type":"ContainerDied","Data":"ff70bd1f19cb66489eb8021670929dc34b2836747cd3559cb1b40b76e9b0db37"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.458264 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" event={"ID":"c26f912f-f640-4b4c-ab61-dd2a163f12ab","Type":"ContainerStarted","Data":"3189bd3a0db29daf1f670c5fde8aa9c551657011478702a0b61c6a161156f464"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.466427 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" event={"ID":"2840702b-d22f-4184-bada-4cd337d79407","Type":"ContainerStarted","Data":"4be7a02a68429fd3aca6fe66c5f908cf70143baa5ed0dc0dc81c5956e39f2350"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.467667 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.471800 
4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" event={"ID":"331c189b-0cb5-4733-a233-894429c709a9","Type":"ContainerStarted","Data":"6d996fae9b23ebe40130923ec7457c18e45a41798247aa59f7da602cb14a7891"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.471830 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" event={"ID":"331c189b-0cb5-4733-a233-894429c709a9","Type":"ContainerStarted","Data":"00ae5e23519b93a235c719050c0aaaa27453e0ec9d39273f3fd74363f96401c4"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.472695 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.475840 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" event={"ID":"8c77bcf1-4025-4c35-9580-41e9a61195e8","Type":"ContainerStarted","Data":"48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.475874 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" event={"ID":"8c77bcf1-4025-4c35-9580-41e9a61195e8","Type":"ContainerStarted","Data":"b99d73db17eb9c6b2aa85ca03f0903902f643a2f2fbc708d9b4c51f4e9d1ede7"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.476602 4836 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sknds container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.476641 4836 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" podUID="331c189b-0cb5-4733-a233-894429c709a9" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.476681 4836 patch_prober.go:28] interesting pod/console-operator-58897d9998-kcm8s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.476776 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" podUID="2840702b-d22f-4184-bada-4cd337d79407" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.476855 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.478141 4836 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5l6x4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.478237 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" podUID="8c77bcf1-4025-4c35-9580-41e9a61195e8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 17 14:08:38 crc 
kubenswrapper[4836]: I0217 14:08:38.479580 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" event={"ID":"3bb5e0b8-9179-4570-a3d8-acaa80b2c884","Type":"ContainerStarted","Data":"f4657696ca94735e533f18cf33a4d84536d0fa2d79311a5bbd8096d04f6aa8de"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.479624 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" event={"ID":"3bb5e0b8-9179-4570-a3d8-acaa80b2c884","Type":"ContainerStarted","Data":"fbc7ee12dfd6b918ba68fe183a564ff72718b972a1838e5e90efb3b9a4d21428"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.493018 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" event={"ID":"9a3f2789-dd41-4e95-8174-db3a40098b0e","Type":"ContainerStarted","Data":"8b955158897397fb5f9170923b488df06517409802fadf21a2572eed8b5bec46"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.494448 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.494840 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.994798322 +0000 UTC m=+145.337726591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.495162 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.496257 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" event={"ID":"5ad14aa6-962d-4f8f-babe-745f65d63560","Type":"ContainerStarted","Data":"7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.496975 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.498474 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:38.99845604 +0000 UTC m=+145.341384309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.512486 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" event={"ID":"65684d1d-5242-464d-8caf-ad4866bf6a86","Type":"ContainerStarted","Data":"8b2041a0bc951b67664b08a48ffa757dc83671e656c0a06eb067f557cc4c5a47"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.514263 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" event={"ID":"985bc83c-52fa-45dc-ab4f-6e47ee47683e","Type":"ContainerStarted","Data":"4b1cfa0180186477ad01885b0380528a2ed9a9e38e3b90ab0219a2e26e3de881"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.514346 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" event={"ID":"985bc83c-52fa-45dc-ab4f-6e47ee47683e","Type":"ContainerStarted","Data":"7d0ca8f5e10670b96b45ab236df0ffcb5b0c0577a99d998beb3a30327978aa5e"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.514703 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.520568 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" event={"ID":"171e2af0-2993-4cd3-942f-043bccca2813","Type":"ContainerStarted","Data":"ce2afad92b1d3e7f3446573cfc68acc3678a98ba8dd05c52e22c8acd6f8e122e"} Feb 17 14:08:38 
crc kubenswrapper[4836]: I0217 14:08:38.521182 4836 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2mmw4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.521225 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" podUID="5ad14aa6-962d-4f8f-babe-745f65d63560" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.524439 4836 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-khbdr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.524480 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" podUID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.568727 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.575119 4836 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6rsds container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" start-of-body= Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.575176 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" podUID="c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.596742 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.596760 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.602772 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.102719948 +0000 UTC m=+145.445648217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.605512 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.632314 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.119814752 +0000 UTC m=+145.462743021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639863 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" event={"ID":"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7","Type":"ContainerStarted","Data":"374ae013639e0d9afa8e234c5feaec0812c5a1b8b7085c57cb72bf432395a8d0"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639894 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" event={"ID":"18ec8466-f311-4f81-ae38-48635b000ced","Type":"ContainerStarted","Data":"648849123f32076d0d03d9d2a494fbc3dfd2c9e02da687a37fcb474c75f57692"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639908 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639917 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" event={"ID":"19216a1e-34af-4764-a621-e5097db4751b","Type":"ContainerStarted","Data":"c732fed9106b6ad88780358d79674f3866d5bc1bee0d51410d5390775fcf4cbc"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639926 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" 
event={"ID":"1d30a99f-a727-4eb4-9a32-0508707384bf","Type":"ContainerStarted","Data":"a735d305d37e372b5eab7f1f96601405978fd0c74ac2a2f77b58f60958a90ac5"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639935 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" event={"ID":"1d30a99f-a727-4eb4-9a32-0508707384bf","Type":"ContainerStarted","Data":"7291ebb14322333ea30d034cdbbbe26e6099c047841086ef774fa19acbdc1145"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639944 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" event={"ID":"95872171-94c1-4b8a-935f-ae180a4e3d11","Type":"ContainerStarted","Data":"c7d64b67b1393749363c0e40b1d7a250a383bf75888f287bcd7649be5c519540"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.639956 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59"] Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.648431 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" event={"ID":"921ecdc3-b5f3-44e4-9300-d25342d944d8","Type":"ContainerStarted","Data":"1cee4451298f59a86bd26b4891652bce600348c354341a200fb5f066413bf22b"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.679602 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" event={"ID":"f8080d32-cbe7-4b02-8791-d9f1f9aca269","Type":"ContainerStarted","Data":"7157c1b2b508140b34c05778d3e34a29b1d928f04cfa7d5b8e263650474961e2"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.717849 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" 
event={"ID":"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40","Type":"ContainerStarted","Data":"e8c4fbe0705bb8961a7928968738bd6e008c9707dd0c001e1e2eaef63fd0999c"} Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.722609 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.727492 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.2274561 +0000 UTC m=+145.570384379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.729557 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.735779 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.735851 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.736349 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.236330125 +0000 UTC m=+145.579258394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.749534 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh"] Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.827353 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq"] Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.831356 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.831976 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.331962614 +0000 UTC m=+145.674890883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.943336 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:38 crc kubenswrapper[4836]: E0217 14:08:38.943891 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.443876526 +0000 UTC m=+145.786804805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:38 crc kubenswrapper[4836]: I0217 14:08:38.997985 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" podStartSLOduration=122.997963832 podStartE2EDuration="2m2.997963832s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:38.976210584 +0000 UTC m=+145.319138853" watchObservedRunningTime="2026-02-17 14:08:38.997963832 +0000 UTC m=+145.340892111" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.061659 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.062104 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.562083724 +0000 UTC m=+145.905011993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.067383 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" podStartSLOduration=123.067347383 podStartE2EDuration="2m3.067347383s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.030883135 +0000 UTC m=+145.373811424" watchObservedRunningTime="2026-02-17 14:08:39.067347383 +0000 UTC m=+145.410275652" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.115775 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rq6q9" podStartSLOduration=124.115754309 podStartE2EDuration="2m4.115754309s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.103791641 +0000 UTC m=+145.446719920" watchObservedRunningTime="2026-02-17 14:08:39.115754309 +0000 UTC m=+145.458682578" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.119846 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6"] Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.181364 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.181953 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.681927996 +0000 UTC m=+146.024856275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.207157 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" podStartSLOduration=124.207136695 podStartE2EDuration="2m4.207136695s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.189426886 +0000 UTC m=+145.532355165" watchObservedRunningTime="2026-02-17 14:08:39.207136695 +0000 UTC m=+145.550064964" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.209610 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6zspj"] Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 
14:08:39.274397 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 14:03:38 +0000 UTC, rotation deadline is 2026-11-17 03:14:03.55217016 +0000 UTC Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.274476 4836 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6541h5m24.277697467s for next certificate rotation Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.289149 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.289557 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.789538833 +0000 UTC m=+146.132467102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.403540 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.408256 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:39.908210994 +0000 UTC m=+146.251139273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.413756 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" podStartSLOduration=123.413727731 podStartE2EDuration="2m3.413727731s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.333962773 +0000 UTC m=+145.676891052" watchObservedRunningTime="2026-02-17 14:08:39.413727731 +0000 UTC m=+145.756656000" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.443971 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"] Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.459713 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wd65t" podStartSLOduration=124.459693841 podStartE2EDuration="2m4.459693841s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.458245023 +0000 UTC m=+145.801173312" watchObservedRunningTime="2026-02-17 14:08:39.459693841 +0000 UTC m=+145.802622110" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.506596 4836 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.507093 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.007066888 +0000 UTC m=+146.349995157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.513096 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw"] Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.528371 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dsrq8" podStartSLOduration=124.528321974 podStartE2EDuration="2m4.528321974s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.51128756 +0000 UTC m=+145.854215839" watchObservedRunningTime="2026-02-17 14:08:39.528321974 +0000 UTC m=+145.871250243" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 
14:08:39.546565 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cwlxz" podStartSLOduration=124.546537297 podStartE2EDuration="2m4.546537297s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.543576849 +0000 UTC m=+145.886505118" watchObservedRunningTime="2026-02-17 14:08:39.546537297 +0000 UTC m=+145.889465586" Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.608506 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.108491791 +0000 UTC m=+146.451420060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.608104 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:39 crc kubenswrapper[4836]: W0217 14:08:39.616059 4836 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91eb437c_beea_4f2d_b3f7_505b87fe6dee.slice/crio-d146dcdb5314cbe00f41e751cbb49254cf23ab6e8cdc06caed3ce29aac7230d2 WatchSource:0}: Error finding container d146dcdb5314cbe00f41e751cbb49254cf23ab6e8cdc06caed3ce29aac7230d2: Status 404 returned error can't find the container with id d146dcdb5314cbe00f41e751cbb49254cf23ab6e8cdc06caed3ce29aac7230d2 Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.651823 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" podStartSLOduration=124.651804832 podStartE2EDuration="2m4.651804832s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.610178636 +0000 UTC m=+145.953106905" watchObservedRunningTime="2026-02-17 14:08:39.651804832 +0000 UTC m=+145.994733101" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.715083 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.715271 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.215246346 +0000 UTC m=+146.558174615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.715734 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.716036 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.216023907 +0000 UTC m=+146.558952176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.736985 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" podStartSLOduration=124.736962883 podStartE2EDuration="2m4.736962883s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.731162739 +0000 UTC m=+146.074091028" watchObservedRunningTime="2026-02-17 14:08:39.736962883 +0000 UTC m=+146.079891152" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.749252 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5cbbv" podStartSLOduration=124.749230338 podStartE2EDuration="2m4.749230338s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.745487629 +0000 UTC m=+146.088415898" watchObservedRunningTime="2026-02-17 14:08:39.749230338 +0000 UTC m=+146.092158617" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.780783 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-98frx" podStartSLOduration=124.780759796 podStartE2EDuration="2m4.780759796s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:39.778950948 +0000 UTC m=+146.121879237" watchObservedRunningTime="2026-02-17 14:08:39.780759796 +0000 UTC m=+146.123688065" Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.832062 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.832596 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.332567381 +0000 UTC m=+146.675495650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.833332 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.833990 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.333960258 +0000 UTC m=+146.676888707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.860267 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6zspj" event={"ID":"6d52104b-91e7-4a3a-9138-163eb850485d","Type":"ContainerStarted","Data":"291ff510753e6307affd77e72c2b113e622f07b799c9441e606ef5eb3889b1a8"} Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.943631 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:39 crc kubenswrapper[4836]: E0217 14:08:39.944678 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.444660207 +0000 UTC m=+146.787588476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.956866 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" event={"ID":"19216a1e-34af-4764-a621-e5097db4751b","Type":"ContainerStarted","Data":"6a0184fe57af3eace5e14316dfc3c005fa9a071572d7b984b02e9739363bc416"} Feb 17 14:08:39 crc kubenswrapper[4836]: I0217 14:08:39.983917 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9kmt4"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.001049 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" event={"ID":"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c","Type":"ContainerStarted","Data":"8a43311cfd525de309c9c9c8044357beee78fd6571a5dfcbbc4d511abab80b36"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.032738 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fqzrl" event={"ID":"296ae94a-36e6-480b-9395-8f6a96621fdf","Type":"ContainerStarted","Data":"c80137cc1b82e9221143cce7ce317f4b02d9faac0b0555753344e61db88522d2"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.035239 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" event={"ID":"1a506e2e-c940-4f10-b89c-948d10ba8902","Type":"ContainerStarted","Data":"6bc5f32428c6d60602cc27b761cbfd980b04d3bb8f94c56d7b88d34c6d56a1d1"} Feb 17 
14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.038311 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8ngwr" event={"ID":"8efc7eee-3b20-4cdf-9062-d64472b2c888","Type":"ContainerStarted","Data":"28116c03b63599f37e6ada53240987953f887af6aa25d8c6d798ad25c7708a0f"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.045966 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.047091 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.547075626 +0000 UTC m=+146.890003895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.063869 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" event={"ID":"18ec8466-f311-4f81-ae38-48635b000ced","Type":"ContainerStarted","Data":"2a96effe68dcde9ae3d5061b948e7a8c5c380763dd8cf4c65d2d0978d91906d5"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.101494 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.104194 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" event={"ID":"628fd7f0-d4b6-4866-b7d4-6966ed698611","Type":"ContainerStarted","Data":"e0f2e827a3faadf00ce4c1bb29a3260e33bcd5ff37992adb5957692b8d9df39d"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.114552 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" event={"ID":"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb","Type":"ContainerStarted","Data":"062c2b9f95e0f8e02647d29a212805138a081d232a028b28cd3b646963fab553"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.120172 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nncbw" podStartSLOduration=125.120156336 
podStartE2EDuration="2m5.120156336s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:40.119406817 +0000 UTC m=+146.462335096" watchObservedRunningTime="2026-02-17 14:08:40.120156336 +0000 UTC m=+146.463084605" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.131911 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" event={"ID":"3bb5e0b8-9179-4570-a3d8-acaa80b2c884","Type":"ContainerStarted","Data":"6e429a13f72b86b358a3b35792870f01ef0a4205e81bb53c3f33656e1241d64c"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.146520 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fqzrl" podStartSLOduration=125.146503556 podStartE2EDuration="2m5.146503556s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:40.145464178 +0000 UTC m=+146.488392467" watchObservedRunningTime="2026-02-17 14:08:40.146503556 +0000 UTC m=+146.489431825" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.147415 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.149078 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:40.649062364 +0000 UTC m=+146.991990633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.199350 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7htl2" podStartSLOduration=125.199328969 podStartE2EDuration="2m5.199328969s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:40.195153598 +0000 UTC m=+146.538081877" watchObservedRunningTime="2026-02-17 14:08:40.199328969 +0000 UTC m=+146.542257228" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.221143 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.272691 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.273327 4836 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.773303783 +0000 UTC m=+147.116232062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.280127 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" event={"ID":"65684d1d-5242-464d-8caf-ad4866bf6a86","Type":"ContainerStarted","Data":"e133e0f2c1fff6df572a502162daf44645b3156b1cdfd9eeb6c1f6241f00aeea"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.312527 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" event={"ID":"921ecdc3-b5f3-44e4-9300-d25342d944d8","Type":"ContainerStarted","Data":"2b43faccef7aab4cdf7763f3ecfedc0f078099ff98d9886a922cb068c209a6fa"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.319910 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7klmp" podStartSLOduration=125.31988893 podStartE2EDuration="2m5.31988893s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:40.317155877 +0000 UTC m=+146.660084166" watchObservedRunningTime="2026-02-17 14:08:40.31988893 +0000 UTC m=+146.662817199" Feb 17 14:08:40 crc 
kubenswrapper[4836]: I0217 14:08:40.374556 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.374960 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.874941431 +0000 UTC m=+147.217869710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.391556 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" event={"ID":"ad67d365-7ef5-406c-9ffe-6f66253704c9","Type":"ContainerStarted","Data":"fa04f773a50c12114b3402cf501ef4d0760330312b6153b8ea03d1ab79f5a403"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.392093 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.406953 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" 
event={"ID":"91eb437c-beea-4f2d-b3f7-505b87fe6dee","Type":"ContainerStarted","Data":"d146dcdb5314cbe00f41e751cbb49254cf23ab6e8cdc06caed3ce29aac7230d2"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.421196 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-284hg"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.437017 4836 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wgzvh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.437086 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" podUID="ad67d365-7ef5-406c-9ffe-6f66253704c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.449653 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" podStartSLOduration=125.449637755 podStartE2EDuration="2m5.449637755s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:40.447845697 +0000 UTC m=+146.790773966" watchObservedRunningTime="2026-02-17 14:08:40.449637755 +0000 UTC m=+146.792566024" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.454911 4836 generic.go:334] "Generic (PLEG): container finished" podID="e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40" containerID="bf8d1eedc34d2af6d2e6de50ac54dac0113b8fb73bd5dae605e4e5b4b2165e46" exitCode=0 Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 
14:08:40.455891 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" event={"ID":"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40","Type":"ContainerDied","Data":"bf8d1eedc34d2af6d2e6de50ac54dac0113b8fb73bd5dae605e4e5b4b2165e46"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.477676 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.481607 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:40.981459249 +0000 UTC m=+147.324387518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.484897 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jjmwc"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.504012 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.527133 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" event={"ID":"171e2af0-2993-4cd3-942f-043bccca2813","Type":"ContainerStarted","Data":"e7d432d1e079472b278684cf08df1245f9dc63f92ea016424fa5c0c99aa22306"} Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.530978 4836 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-khbdr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.531035 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" podUID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.532248 4836 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6scjm"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.536553 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.539051 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.550636 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.557489 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.580256 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.581177 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.081155307 +0000 UTC m=+147.424083586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.587204 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sknds" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.587243 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-72n7k"] Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.635364 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-kcm8s" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.688247 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.719135 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.219117879 +0000 UTC m=+147.562046218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: W0217 14:08:40.765080 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc0c6ee_c7d3_4e99_a6c2_b8de8f7ffa00.slice/crio-7e91d1245e3a5f406f922429283277502660f1176e6a84a89b48e1271e943fc8 WatchSource:0}: Error finding container 7e91d1245e3a5f406f922429283277502660f1176e6a84a89b48e1271e943fc8: Status 404 returned error can't find the container with id 7e91d1245e3a5f406f922429283277502660f1176e6a84a89b48e1271e943fc8 Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.792436 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.793471 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.293452453 +0000 UTC m=+147.636380732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.906119 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:40 crc kubenswrapper[4836]: E0217 14:08:40.906572 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.406526616 +0000 UTC m=+147.749454885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.918263 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.972942 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 17 14:08:40 crc kubenswrapper[4836]: I0217 14:08:40.973004 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.009534 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.010192 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.510166887 +0000 UTC m=+147.853095156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.032685 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls"] Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.062066 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8"] Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.114986 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.115315 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.615301369 +0000 UTC m=+147.958229638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.138929 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cnq25"] Feb 17 14:08:41 crc kubenswrapper[4836]: W0217 14:08:41.155816 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1679c4a6_a707_4150_825b_5cb8b90cb27c.slice/crio-9f3428a83cd09a47ab4feb5e6bc8c24a7d13c5ec588cb5bb2c42e10da3ed95e7 WatchSource:0}: Error finding container 9f3428a83cd09a47ab4feb5e6bc8c24a7d13c5ec588cb5bb2c42e10da3ed95e7: Status 404 returned error can't find the container with id 9f3428a83cd09a47ab4feb5e6bc8c24a7d13c5ec588cb5bb2c42e10da3ed95e7 Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.217224 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.217763 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.717719508 +0000 UTC m=+148.060647767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.217945 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.218359 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.718335284 +0000 UTC m=+148.061263573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.319476 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.320165 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.820143457 +0000 UTC m=+148.163071726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.421442 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.421812 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:41.921801387 +0000 UTC m=+148.264729656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.526033 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.526557 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.026539168 +0000 UTC m=+148.369467447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.628092 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.628565 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.128551906 +0000 UTC m=+148.471480175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.636855 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" event={"ID":"1ecc7c98-e9a3-4850-a741-7e0bcf670e27","Type":"ContainerStarted","Data":"fecefbcabce7d581460da721089e3aaf00a80d8b8481ab7813bd28b6b56e33f4"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.636942 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" event={"ID":"1ecc7c98-e9a3-4850-a741-7e0bcf670e27","Type":"ContainerStarted","Data":"abe8216f04612f75edf41ec18a2f6650cca5fd3f31ec4360b8c966e6edcb969b"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.678075 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6zspj" event={"ID":"6d52104b-91e7-4a3a-9138-163eb850485d","Type":"ContainerStarted","Data":"f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.736229 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.737108 4836 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.237086127 +0000 UTC m=+148.580014396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.743098 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" event={"ID":"91eb437c-beea-4f2d-b3f7-505b87fe6dee","Type":"ContainerStarted","Data":"4b8580f44aade0425b4de34e0f49d07bd6192e526f9c10aa11b53556a3546660"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.757326 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" event={"ID":"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902","Type":"ContainerStarted","Data":"a71f5b681c1cf8c3e0036b27104d45355d9eed9eb0c4dbd3b2ed191daafeaa1e"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.757422 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" event={"ID":"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902","Type":"ContainerStarted","Data":"12d23fb1c3176bf201275d87f8aadc303da6853f0b3cacde10cb3ec5cfffcc32"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.787683 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6zspj" podStartSLOduration=126.78766003 
podStartE2EDuration="2m6.78766003s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:41.73721572 +0000 UTC m=+148.080143999" watchObservedRunningTime="2026-02-17 14:08:41.78766003 +0000 UTC m=+148.130588329" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.790244 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" podStartSLOduration=126.790233368 podStartE2EDuration="2m6.790233368s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:41.785908504 +0000 UTC m=+148.128836793" watchObservedRunningTime="2026-02-17 14:08:41.790233368 +0000 UTC m=+148.133161637" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.803131 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" event={"ID":"c26f912f-f640-4b4c-ab61-dd2a163f12ab","Type":"ContainerStarted","Data":"339b233a0392cc1ae25e160cbae69bfbfda6b94a68d2c6fec86051674706c3bd"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.804087 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.818263 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" event={"ID":"66402e53-3287-45c4-bceb-78fc99836c5b","Type":"ContainerStarted","Data":"ecaf3ef8d304449130aaa81d7ce6f0ebfc459031923106bdf43b8d4e6645a320"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.828353 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" event={"ID":"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c","Type":"ContainerStarted","Data":"2ba11af3b63fe390525fa68100541d0617a3c07adef0279400f0bc4e690218ff"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.828401 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" event={"ID":"1ce6cce5-c0bb-4d10-8458-bb9e15832a9c","Type":"ContainerStarted","Data":"b4b8b7a1f292811ffe1d6ad1d5f62a28882d64527b4d531bfcdee88772ad6c9d"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.829087 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.841666 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.841932 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.34191996 +0000 UTC m=+148.684848229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.845361 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" event={"ID":"1a506e2e-c940-4f10-b89c-948d10ba8902","Type":"ContainerStarted","Data":"3fecb315f3247083dd673ec8f96e4d094038e9776b4dc89179ed6b9b7343abe9"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.862403 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" podStartSLOduration=126.862362234 podStartE2EDuration="2m6.862362234s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:41.860573136 +0000 UTC m=+148.203501415" watchObservedRunningTime="2026-02-17 14:08:41.862362234 +0000 UTC m=+148.205290513" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.868732 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" event={"ID":"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00","Type":"ContainerStarted","Data":"7e91d1245e3a5f406f922429283277502660f1176e6a84a89b48e1271e943fc8"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.875982 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.878746 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" event={"ID":"19216a1e-34af-4764-a621-e5097db4751b","Type":"ContainerStarted","Data":"541952d69cab6c11b3fc7e559178910c58e24409f9e10fa515210c2d7678eafe"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.898430 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" event={"ID":"171e2af0-2993-4cd3-942f-043bccca2813","Type":"ContainerStarted","Data":"e18399186af073660f549c96051d9c998265a5ce1b626eb8518999225e1d2674"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.934048 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:41 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:41 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:41 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.934100 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.938147 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" event={"ID":"759ba4ec-c9e0-43a5-9f05-5a05bde7c3fb","Type":"ContainerStarted","Data":"1896c6ebfdf62fb56a8dfec0d91294e09cab7f8d55a164ff613d06b7c1ca705d"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.942218 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.945122 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" event={"ID":"c2f4f6fb-d604-402f-83d0-6b25781c3aa8","Type":"ContainerStarted","Data":"e1b370fccdd3c0cbe1bd48b096eb15a417e6cde60a9f9fca5ca899cf49f77f6f"} Feb 17 14:08:41 crc kubenswrapper[4836]: E0217 14:08:41.945539 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.445518481 +0000 UTC m=+148.788446920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.951585 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59" podStartSLOduration=125.951560751 podStartE2EDuration="2m5.951560751s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:41.944091394 +0000 UTC m=+148.287019673" watchObservedRunningTime="2026-02-17 14:08:41.951560751 +0000 UTC m=+148.294489020" Feb 17 14:08:41 crc 
kubenswrapper[4836]: I0217 14:08:41.966080 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" event={"ID":"628fd7f0-d4b6-4866-b7d4-6966ed698611","Type":"ContainerStarted","Data":"6931602701dbddf7d3aef736b234ff4b522e3851fded6f8218a926b7a1174ab2"} Feb 17 14:08:41 crc kubenswrapper[4836]: I0217 14:08:41.966161 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" event={"ID":"628fd7f0-d4b6-4866-b7d4-6966ed698611","Type":"ContainerStarted","Data":"e1eeb465451bc50cb039cba588fd31be184af00e413db8d626ef59e16043bdfa"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.000321 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4bfcw" podStartSLOduration=127.000274135 podStartE2EDuration="2m7.000274135s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:41.998270522 +0000 UTC m=+148.341198811" watchObservedRunningTime="2026-02-17 14:08:42.000274135 +0000 UTC m=+148.343202404" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.022395 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" event={"ID":"ad67d365-7ef5-406c-9ffe-6f66253704c9","Type":"ContainerStarted","Data":"388642265f4db9d0735cce02485d442992568c7fd46001c7bc78254a567dbaca"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.027437 4836 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wgzvh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= 
Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.027494 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" podUID="ad67d365-7ef5-406c-9ffe-6f66253704c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.049428 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.054143 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.554124645 +0000 UTC m=+148.897053104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.115088 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" event={"ID":"c758606a-b3e4-494e-a2a6-7a7320277b37","Type":"ContainerStarted","Data":"409c23bf268949259e227a8a90fb2343eddc04f3a9768ef7989e55923aeb9dfc"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.115143 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" event={"ID":"c758606a-b3e4-494e-a2a6-7a7320277b37","Type":"ContainerStarted","Data":"228fee33c87b0b716de72ec5eddd835617b8a8b71e05b8b117fbe54c01548e3a"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.116237 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.117928 4836 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x2x76 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.117967 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" podUID="c758606a-b3e4-494e-a2a6-7a7320277b37" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.120507 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" event={"ID":"921ecdc3-b5f3-44e4-9300-d25342d944d8","Type":"ContainerStarted","Data":"3c505489717a23a07a29188af5920d31395e6fa5597950a0715ef893694d8c29"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.143810 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" event={"ID":"cea58b47-da5e-4dc7-be23-19d8408318d7","Type":"ContainerStarted","Data":"93c66f80e4f7a852f9530feeb27cdd8fe450ee2f132d821836ade2d0abda189b"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.143883 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" event={"ID":"cea58b47-da5e-4dc7-be23-19d8408318d7","Type":"ContainerStarted","Data":"d0b0ff9c3637aff4d5ccd9c5d218aa1b169a1c8970c688131c8c515a8d4e2cf7"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.152045 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.154002 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.653960536 +0000 UTC m=+148.996888985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.178608 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" event={"ID":"1679c4a6-a707-4150-825b-5cb8b90cb27c","Type":"ContainerStarted","Data":"9f3428a83cd09a47ab4feb5e6bc8c24a7d13c5ec588cb5bb2c42e10da3ed95e7"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.265880 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.271126 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.771103316 +0000 UTC m=+149.114031585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.277802 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ch9j6" podStartSLOduration=127.277784443 podStartE2EDuration="2m7.277784443s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:42.22870399 +0000 UTC m=+148.571632279" watchObservedRunningTime="2026-02-17 14:08:42.277784443 +0000 UTC m=+148.620712712" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.360872 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fqzrl" event={"ID":"296ae94a-36e6-480b-9395-8f6a96621fdf","Type":"ContainerStarted","Data":"d0f3be10b979ed9ddc992a293ce660fd9546585be5a409a195c0c6066553e222"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.368771 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.369690 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.869670343 +0000 UTC m=+149.212598612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.382670 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8ngwr" event={"ID":"8efc7eee-3b20-4cdf-9062-d64472b2c888","Type":"ContainerStarted","Data":"bb5cf0343b6bd13a71f5c7f74c1e2686d90220fe8354c4422de092015efed3e1"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.391777 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" event={"ID":"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5","Type":"ContainerStarted","Data":"ae1d3dd43412d1028433bdfbcf6d347aa29f071463e21340063aaf24f68c1104"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.404287 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" event={"ID":"83427963-071f-40a0-8988-b39a3d41e59f","Type":"ContainerStarted","Data":"66c23c64f3d021901675ec2843a8277fb0ae34c926962ac2f3cd7783af1be3d7"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.413511 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-284hg" event={"ID":"f799cad1-5a28-4af5-8070-5c365cddbf78","Type":"ContainerStarted","Data":"5c9c12c8a9ba3079222f116257011486923d4631bdb296cc7bd06e6055fa3f43"} Feb 17 14:08:42 crc kubenswrapper[4836]: 
I0217 14:08:42.423189 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qc9j6" podStartSLOduration=127.423147952 podStartE2EDuration="2m7.423147952s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:42.297678872 +0000 UTC m=+148.640607141" watchObservedRunningTime="2026-02-17 14:08:42.423147952 +0000 UTC m=+148.766076222" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.424786 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9kmt4" event={"ID":"f70daa4b-d685-406a-ba3a-7fa6d672acdd","Type":"ContainerStarted","Data":"f55d51822910cf61da64c47ac1417ba9aebe9d1f79039fa69723576dadcf0637"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.424840 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9kmt4" event={"ID":"f70daa4b-d685-406a-ba3a-7fa6d672acdd","Type":"ContainerStarted","Data":"e32b19868a282e2ff1b39a9f475fd5f67a0902937d2759466c7ef84e64469cec"} Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.429225 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.473342 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.474094 4836 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:42.974071735 +0000 UTC m=+149.316999994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.498740 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8sd2q" podStartSLOduration=127.498705819 podStartE2EDuration="2m7.498705819s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:42.426613095 +0000 UTC m=+148.769541364" watchObservedRunningTime="2026-02-17 14:08:42.498705819 +0000 UTC m=+148.841634098" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.574847 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.576194 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:43.076176855 +0000 UTC m=+149.419105124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.651522 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wsdcq" podStartSLOduration=127.651497005 podStartE2EDuration="2m7.651497005s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:42.520825506 +0000 UTC m=+148.863753785" watchObservedRunningTime="2026-02-17 14:08:42.651497005 +0000 UTC m=+148.994425274" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.677046 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.677458 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.177441955 +0000 UTC m=+149.520370224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.780639 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.781053 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.281030475 +0000 UTC m=+149.623958734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.808983 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" podStartSLOduration=126.808960966 podStartE2EDuration="2m6.808960966s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:42.651882526 +0000 UTC m=+148.994810795" watchObservedRunningTime="2026-02-17 14:08:42.808960966 +0000 UTC m=+149.151889235" Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.899508 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:42 crc kubenswrapper[4836]: E0217 14:08:42.899936 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.399921822 +0000 UTC m=+149.742850091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.923649 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:42 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:42 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:42 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:42 crc kubenswrapper[4836]: I0217 14:08:42.923720 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.008395 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.008745 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:43.5087247 +0000 UTC m=+149.851652969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.110375 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.110880 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.610854411 +0000 UTC m=+149.953782860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.143698 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jf2hd" podStartSLOduration=128.143661363 podStartE2EDuration="2m8.143661363s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:42.811934825 +0000 UTC m=+149.154863094" watchObservedRunningTime="2026-02-17 14:08:43.143661363 +0000 UTC m=+149.486589622" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.145492 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.146412 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:43 crc kubenswrapper[4836]: W0217 14:08:43.160539 4836 reflector.go:561] object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n": failed to list *v1.Secret: secrets "installer-sa-dockercfg-kjl2n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-controller-manager": no relationship found between node 'crc' and this object Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.160593 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"installer-sa-dockercfg-kjl2n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installer-sa-dockercfg-kjl2n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:43 crc kubenswrapper[4836]: W0217 14:08:43.182449 4836 reflector.go:561] object-"openshift-kube-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager": no relationship found between node 'crc' and this object Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.182507 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.230477 4836 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.230705 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/caed4fb3-6dd4-4427-880f-fee413854d48-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.230781 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caed4fb3-6dd4-4427-880f-fee413854d48-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.230882 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.730865648 +0000 UTC m=+150.073793917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.250538 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.332145 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.332214 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/caed4fb3-6dd4-4427-880f-fee413854d48-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.332352 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caed4fb3-6dd4-4427-880f-fee413854d48-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.332417 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.332453 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.335496 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/caed4fb3-6dd4-4427-880f-fee413854d48-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.336000 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.835977269 +0000 UTC m=+150.178905538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.345501 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" podStartSLOduration=127.345472801 podStartE2EDuration="2m7.345472801s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:43.342689627 +0000 UTC m=+149.685617916" watchObservedRunningTime="2026-02-17 14:08:43.345472801 +0000 UTC m=+149.688401080" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.350660 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.378462 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.435909 4836 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.436086 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.436111 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.438489 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:43.938456019 +0000 UTC m=+150.281384288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.440526 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9kmt4" podStartSLOduration=9.440510124 podStartE2EDuration="9.440510124s" podCreationTimestamp="2026-02-17 14:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:43.39518612 +0000 UTC m=+149.738114409" watchObservedRunningTime="2026-02-17 14:08:43.440510124 +0000 UTC m=+149.783438393" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.451182 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.460717 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.539444 4836 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.539831 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.039814891 +0000 UTC m=+150.382743160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.544825 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" event={"ID":"1ecc7c98-e9a3-4850-a741-7e0bcf670e27","Type":"ContainerStarted","Data":"b4aefdd0721537ab913492aef942f5b49036dd754f7aca8276dc6e3aee6f0498"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.567730 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" event={"ID":"83427963-071f-40a0-8988-b39a3d41e59f","Type":"ContainerStarted","Data":"972e5982538d255cd8980703966e8868df496eeb602cbe64b2430afbfe05045f"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.567784 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" event={"ID":"83427963-071f-40a0-8988-b39a3d41e59f","Type":"ContainerStarted","Data":"0b77a265395504163a0cad4be47c7aef193020081ba1eef748429bf9343c0461"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.569691 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" podStartSLOduration=127.569666163 podStartE2EDuration="2m7.569666163s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:43.557280304 +0000 UTC m=+149.900208603" watchObservedRunningTime="2026-02-17 14:08:43.569666163 +0000 UTC m=+149.912594432" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.570688 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jhzxl" podStartSLOduration=127.570682711 podStartE2EDuration="2m7.570682711s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:43.48142104 +0000 UTC m=+149.824349309" watchObservedRunningTime="2026-02-17 14:08:43.570682711 +0000 UTC m=+149.913611000" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.582098 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.597935 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.598467 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.598852 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" event={"ID":"e634dd4f-5615-4dd7-ac0f-8c22a3ee7d40","Type":"ContainerStarted","Data":"6d0e2dedcae09446f7b3bfaf744b1c829817ff9ce38f5a925b95962072d0f167"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.639454 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8ngwr" podStartSLOduration=9.639417015 podStartE2EDuration="9.639417015s" podCreationTimestamp="2026-02-17 14:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:43.618024338 +0000 UTC m=+149.960952627" watchObservedRunningTime="2026-02-17 14:08:43.639417015 +0000 UTC m=+149.982345294" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.640187 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.641108 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.141083449 +0000 UTC m=+150.484011718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.743315 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.744164 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.244149766 +0000 UTC m=+150.587078035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.752671 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gtjx7" podStartSLOduration=128.752653012 podStartE2EDuration="2m8.752653012s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:43.66293236 +0000 UTC m=+150.005860649" watchObservedRunningTime="2026-02-17 14:08:43.752653012 +0000 UTC m=+150.095581291" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.770634 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjmwc" podStartSLOduration=127.770601959 podStartE2EDuration="2m7.770601959s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:43.76838463 +0000 UTC m=+150.111312909" watchObservedRunningTime="2026-02-17 14:08:43.770601959 +0000 UTC m=+150.113530228" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.790264 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" event={"ID":"56a1d7ef-ccae-4b8e-b94f-edee0ce6e902","Type":"ContainerStarted","Data":"36f97f4b465b627a7bbdc9ff6a08105f7f3504b8424041c3c5d0af5de400a7e9"} 
Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.792089 4836 generic.go:334] "Generic (PLEG): container finished" podID="66402e53-3287-45c4-bceb-78fc99836c5b" containerID="e1410cad0bdf253c1f33bcce8f3592648cbfa9dbf3714face250be327e64211e" exitCode=0 Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.792143 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" event={"ID":"66402e53-3287-45c4-bceb-78fc99836c5b","Type":"ContainerDied","Data":"e1410cad0bdf253c1f33bcce8f3592648cbfa9dbf3714face250be327e64211e"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.815232 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ld8ls" event={"ID":"b8fcc519-bc1d-4a7e-8005-cb29f435f4e5","Type":"ContainerStarted","Data":"f4b1b6ddbd3387e065362939698ff59a43d98af047d58fe3730f24961cf9d3f3"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.926481 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:43 crc kubenswrapper[4836]: E0217 14:08:43.928098 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.42807941 +0000 UTC m=+150.771007689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.975764 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:43 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:43 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:43 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.975821 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.976456 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-284hg" event={"ID":"f799cad1-5a28-4af5-8070-5c365cddbf78","Type":"ContainerStarted","Data":"a342a3e318d1021d98dd316efc76e3c2170c5c0d31cd5bd52c4f284086a9a33f"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.976507 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-284hg" event={"ID":"f799cad1-5a28-4af5-8070-5c365cddbf78","Type":"ContainerStarted","Data":"6822c77d848fcdab528abca24fc98ef79abf427180e8f5d03d60f8f9a6387c3d"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.976816 4836 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-dns/dns-default-284hg" Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.985267 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" event={"ID":"c2f4f6fb-d604-402f-83d0-6b25781c3aa8","Type":"ContainerStarted","Data":"4e7219136a28b611d66687e06cd6aadc0123bae2d2908a16f756658233014eb9"} Feb 17 14:08:43 crc kubenswrapper[4836]: I0217 14:08:43.986856 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" event={"ID":"0fc0c6ee-c7d3-4e99-a6c2-b8de8f7ffa00","Type":"ContainerStarted","Data":"4212dc04069a7b1e36bc72e9b817419946105d0e67d1d9d67b83b9eeff6bc959"} Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.023781 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" event={"ID":"1679c4a6-a707-4150-825b-5cb8b90cb27c","Type":"ContainerStarted","Data":"82d604aef88d37b6109f35e29ebfcdc7a938ff194ec971f9fa09d7f3c1413d68"} Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.027694 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.027988 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.527976061 +0000 UTC m=+150.870904330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.143349 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.143597 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.144070 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.644049253 +0000 UTC m=+150.986977592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.144148 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.144444 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.644437614 +0000 UTC m=+150.987365883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.244738 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.246423 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.746402801 +0000 UTC m=+151.089331090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.347625 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.348034 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.848019939 +0000 UTC m=+151.190948208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.391600 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wgzvh" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.465696 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.477918 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.478248 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:44.978229776 +0000 UTC m=+151.321158055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.502834 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caed4fb3-6dd4-4427-880f-fee413854d48-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.514608 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l5kfm" podStartSLOduration=129.514588672 podStartE2EDuration="2m9.514588672s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:44.477413674 +0000 UTC m=+150.820341963" watchObservedRunningTime="2026-02-17 14:08:44.514588672 +0000 UTC m=+150.857516941" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.580503 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.581134 4836 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.081107147 +0000 UTC m=+151.424035416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.707092 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.707945 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.207922085 +0000 UTC m=+151.550850354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.711621 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.713638 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-72n7k" podStartSLOduration=128.713598575 podStartE2EDuration="2m8.713598575s" podCreationTimestamp="2026-02-17 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:44.516359279 +0000 UTC m=+150.859287568" watchObservedRunningTime="2026-02-17 14:08:44.713598575 +0000 UTC m=+151.056526864" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.787073 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2xtf8" podStartSLOduration=129.787048085 podStartE2EDuration="2m9.787048085s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:44.786689576 +0000 UTC m=+151.129617845" watchObservedRunningTime="2026-02-17 14:08:44.787048085 +0000 UTC m=+151.129976344" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.810978 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.811346 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.3113292 +0000 UTC m=+151.654257469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.921837 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:44 crc kubenswrapper[4836]: E0217 14:08:44.922163 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.422136532 +0000 UTC m=+151.765064801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.953998 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-284hg" podStartSLOduration=10.953969828 podStartE2EDuration="10.953969828s" podCreationTimestamp="2026-02-17 14:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:44.874000985 +0000 UTC m=+151.216929264" watchObservedRunningTime="2026-02-17 14:08:44.953969828 +0000 UTC m=+151.296898117" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.965232 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:44 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:44 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:44 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.965403 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.982344 4836 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-9w8zr"] Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.983609 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:44 crc kubenswrapper[4836]: I0217 14:08:44.990652 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.025004 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-utilities\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.025117 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.025167 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-catalog-content\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.025203 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknx6\" (UniqueName: 
\"kubernetes.io/projected/089d1289-afe9-4ffe-9d96-ac10058335ed-kube-api-access-tknx6\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.025658 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.52563642 +0000 UTC m=+151.868564689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.027277 4836 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x2x76 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.027413 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" podUID="c758606a-b3e4-494e-a2a6-7a7320277b37" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.032656 4836 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-2rnsr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.032769 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" podUID="c26f912f-f640-4b4c-ab61-dd2a163f12ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.129981 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.130309 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-utilities\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.130413 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-catalog-content\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc 
kubenswrapper[4836]: I0217 14:08:45.130450 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tknx6\" (UniqueName: \"kubernetes.io/projected/089d1289-afe9-4ffe-9d96-ac10058335ed-kube-api-access-tknx6\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.130985 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.630962067 +0000 UTC m=+151.973890336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.131900 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-utilities\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.132049 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-catalog-content\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " 
pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.268935 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.270565 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.770551753 +0000 UTC m=+152.113480022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.348306 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tknx6\" (UniqueName: \"kubernetes.io/projected/089d1289-afe9-4ffe-9d96-ac10058335ed-kube-api-access-tknx6\") pod \"community-operators-9w8zr\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.374939 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.375430 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:45.875409857 +0000 UTC m=+152.218338126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.594370 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.594865 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.094852643 +0000 UTC m=+152.437780912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.622189 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.632445 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vfmw4"] Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.634087 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: W0217 14:08:45.639144 4836 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.639207 4836 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.663677 4836 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9w8zr"] Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.672974 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmpvx"] Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.674588 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.696316 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.696505 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd5dt\" (UniqueName: \"kubernetes.io/projected/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-kube-api-access-xd5dt\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.696577 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fk4b\" (UniqueName: \"kubernetes.io/projected/c6c873c6-ddde-4b9b-9141-e6de9be567d4-kube-api-access-7fk4b\") pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.696599 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-catalog-content\") 
pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.696640 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-catalog-content\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.696668 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-utilities\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.696710 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-utilities\") pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.696826 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.19681092 +0000 UTC m=+152.539739189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.799901 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd5dt\" (UniqueName: \"kubernetes.io/projected/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-kube-api-access-xd5dt\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.800001 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.800050 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fk4b\" (UniqueName: \"kubernetes.io/projected/c6c873c6-ddde-4b9b-9141-e6de9be567d4-kube-api-access-7fk4b\") pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.800100 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-catalog-content\") pod 
\"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.800138 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-catalog-content\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.800182 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-utilities\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.800205 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-utilities\") pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.800828 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-utilities\") pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.801762 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:46.301747606 +0000 UTC m=+152.644675875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.802210 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-catalog-content\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.802578 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-catalog-content\") pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.802891 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-utilities\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.941394 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:45 crc kubenswrapper[4836]: E0217 14:08:45.941925 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.441900127 +0000 UTC m=+152.784828396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.950954 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:45 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:45 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:45 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:45 crc kubenswrapper[4836]: I0217 14:08:45.951027 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.019021 4836 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-cmk55"] Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.020235 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.043267 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.043805 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.543782993 +0000 UTC m=+152.886711322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.097753 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmpvx"] Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.144901 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.145451 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-catalog-content\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.145491 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-utilities\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.145534 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fmwlv\" (UniqueName: \"kubernetes.io/projected/f1bd4ed0-3b99-4446-9218-71bb589da4a4-kube-api-access-fmwlv\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.145682 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.645662538 +0000 UTC m=+152.988590807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.175537 4836 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2rnsr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.175607 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" podUID="c26f912f-f640-4b4c-ab61-dd2a163f12ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.194782 4836 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x2x76 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": context deadline exceeded" start-of-body= Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.194828 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" podUID="c758606a-b3e4-494e-a2a6-7a7320277b37" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": context deadline exceeded" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.200208 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmk55"] Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.203733 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fk4b\" (UniqueName: \"kubernetes.io/projected/c6c873c6-ddde-4b9b-9141-e6de9be567d4-kube-api-access-7fk4b\") pod \"community-operators-tmpvx\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.280238 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-catalog-content\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.280532 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-utilities\") pod \"certified-operators-cmk55\" (UID: 
\"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.280566 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwlv\" (UniqueName: \"kubernetes.io/projected/f1bd4ed0-3b99-4446-9218-71bb589da4a4-kube-api-access-fmwlv\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.280590 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.280887 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.780875477 +0000 UTC m=+153.123803746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.281246 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-catalog-content\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.281477 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-utilities\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.288277 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfmw4"] Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.315014 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd5dt\" (UniqueName: \"kubernetes.io/projected/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-kube-api-access-xd5dt\") pod \"certified-operators-vfmw4\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.352209 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.381522 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.382713 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.88269161 +0000 UTC m=+153.225619879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.386367 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwlv\" (UniqueName: \"kubernetes.io/projected/f1bd4ed0-3b99-4446-9218-71bb589da4a4-kube-api-access-fmwlv\") pod \"certified-operators-cmk55\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.490187 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.490627 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:46.990615256 +0000 UTC m=+153.333543525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.530966 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.531050 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.531130 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.531149 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.531417 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" event={"ID":"66402e53-3287-45c4-bceb-78fc99836c5b","Type":"ContainerStarted","Data":"aa94ece838f84b94397713060dcb950b0f6669b05f4564f61741a418f75afa9f"} Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.595874 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.09584118 +0000 UTC m=+153.438769449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.626348 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.626821 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.627393 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.127330636 +0000 UTC m=+153.470258905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.730894 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.732452 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.232425757 +0000 UTC m=+153.575354026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.837145 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.837522 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.337507607 +0000 UTC m=+153.680435876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.904559 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" Feb 17 14:08:46 crc kubenswrapper[4836]: I0217 14:08:46.972481 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:46 crc kubenswrapper[4836]: E0217 14:08:46.973856 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.473836916 +0000 UTC m=+153.816765185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.039908 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.041314 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.052177 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.077773 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.078162 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.578150266 +0000 UTC m=+153.921078525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.087572 4836 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2rnsr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.087666 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2rnsr" podUID="c26f912f-f640-4b4c-ab61-dd2a163f12ab" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.087796 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:47 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:47 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:47 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.087869 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" 
podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.179899 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.180005 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.67998565 +0000 UTC m=+154.022913919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.180207 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.180490 4836 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.680483933 +0000 UTC m=+154.023412202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.252145 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.265044 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.272779 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwhr"] Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.274221 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.281281 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.281617 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.281742 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.781721171 +0000 UTC m=+154.124649440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.329495 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwhr"] Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.383354 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-catalog-content\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.383427 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqlz\" (UniqueName: \"kubernetes.io/projected/e9f23804-837d-4d3c-94b7-7cdefe6e94df-kube-api-access-hqqlz\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.383458 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.383478 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-utilities\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.384810 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.884794868 +0000 UTC m=+154.227723137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.484801 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.485005 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-catalog-content\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 
14:08:47.485059 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqlz\" (UniqueName: \"kubernetes.io/projected/e9f23804-837d-4d3c-94b7-7cdefe6e94df-kube-api-access-hqqlz\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.485093 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-utilities\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.485770 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-utilities\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.486007 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-catalog-content\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.486070 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:47.986055876 +0000 UTC m=+154.328984145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.584751 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5kqqh"] Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.585923 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.586310 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.586771 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:48.0867566 +0000 UTC m=+154.429684869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.609555 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kqqh"] Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.616260 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqlz\" (UniqueName: \"kubernetes.io/projected/e9f23804-837d-4d3c-94b7-7cdefe6e94df-kube-api-access-hqqlz\") pod \"redhat-marketplace-pxwhr\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.688825 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.689060 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-utilities\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.689088 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t8k6d\" (UniqueName: \"kubernetes.io/projected/c85860a6-c3bb-448b-b812-cbf38230de01-kube-api-access-t8k6d\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.689164 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-catalog-content\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.689893 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:48.189871118 +0000 UTC m=+154.532799387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.814171 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-utilities\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.814220 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8k6d\" (UniqueName: \"kubernetes.io/projected/c85860a6-c3bb-448b-b812-cbf38230de01-kube-api-access-t8k6d\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.814271 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-catalog-content\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.814324 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: 
\"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.814695 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:48.314678221 +0000 UTC m=+154.657606490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.817908 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-catalog-content\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.823795 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-utilities\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.868580 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.916456 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8k6d\" (UniqueName: \"kubernetes.io/projected/c85860a6-c3bb-448b-b812-cbf38230de01-kube-api-access-t8k6d\") pod \"redhat-marketplace-5kqqh\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.917283 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fqzrl" Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.918990 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:47 crc kubenswrapper[4836]: E0217 14:08:47.920132 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:48.4201135 +0000 UTC m=+154.763041769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.947803 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:47 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:47 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:47 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:47 crc kubenswrapper[4836]: I0217 14:08:47.948045 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.020064 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.020417 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:48.520405073 +0000 UTC m=+154.863333342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.072701 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.072742 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.101424 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-252vj"] Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.102684 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.108642 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.124995 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.125166 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-catalog-content\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.125366 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thgpf\" (UniqueName: \"kubernetes.io/projected/a172042c-7dc6-4cea-906e-3d9135523f15-kube-api-access-thgpf\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.125392 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-utilities\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.126361 4836 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:48.626279734 +0000 UTC m=+154.969208013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.133464 4836 patch_prober.go:28] interesting pod/console-f9d7485db-6zspj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.133535 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6zspj" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.134961 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-252vj"] Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.143842 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.171605 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x2x76" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.229664 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-utilities\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.229705 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thgpf\" (UniqueName: \"kubernetes.io/projected/a172042c-7dc6-4cea-906e-3d9135523f15-kube-api-access-thgpf\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.229762 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-catalog-content\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.229808 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.230185 4836 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:48.730171103 +0000 UTC m=+155.073099372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.230350 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-utilities\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.230837 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-catalog-content\") pod \"redhat-operators-252vj\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.295989 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.308392 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thgpf\" (UniqueName: \"kubernetes.io/projected/a172042c-7dc6-4cea-906e-3d9135523f15-kube-api-access-thgpf\") pod \"redhat-operators-252vj\" (UID: 
\"a172042c-7dc6-4cea-906e-3d9135523f15\") " pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.336956 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.337955 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:48.837924534 +0000 UTC m=+155.180852803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.433982 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.484250 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.632862 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.633654 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.133637865 +0000 UTC m=+155.476566144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.798794 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.810015 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.309978607 +0000 UTC m=+155.652906876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.823988 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"01b4a0bc4233e480d2a5e4ad0e393206ecbab9a812054b85c0b53ea0a3483348"} Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.824041 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9w8zr"] Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.824718 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.825126 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.325108999 +0000 UTC m=+155.668037268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.826406 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"caed4fb3-6dd4-4427-880f-fee413854d48","Type":"ContainerStarted","Data":"b0ae66c7e07c61466cd3f90b98740cfd0b7ef75ac524fdbc34cb7a0d3e897bbf"} Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.906978 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" event={"ID":"66402e53-3287-45c4-bceb-78fc99836c5b","Type":"ContainerStarted","Data":"170dd031ad3e600b8c3e288bbcdf262adb7b37fc3c878cc3458e081c8193a929"} Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.940127 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:48 crc kubenswrapper[4836]: E0217 14:08:48.941312 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.441260792 +0000 UTC m=+155.784189121 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.952946 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:48 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:48 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:48 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:48 crc kubenswrapper[4836]: I0217 14:08:48.953005 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.005848 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"988fe3fef96881ecf0d201e42800c566c722a81d012c8d57dd9b9480fe9595c1"} Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.012328 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6d6e2e0928ae5ceb546a64e10f83c1dcdbf16201e96a338b090459780c5233b8"} Feb 17 14:08:49 crc 
kubenswrapper[4836]: I0217 14:08:49.032851 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5rfnm"] Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.034506 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.061868 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6h7n" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.062808 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-catalog-content\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.062828 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-utilities\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.062943 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.062975 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnjxf\" (UniqueName: 
\"kubernetes.io/projected/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-kube-api-access-jnjxf\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.066098 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.566085697 +0000 UTC m=+155.909013966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.163831 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.164118 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-catalog-content\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.164149 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-utilities\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.164270 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnjxf\" (UniqueName: \"kubernetes.io/projected/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-kube-api-access-jnjxf\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.164835 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-catalog-content\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.164922 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.6649024 +0000 UTC m=+156.007830669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.165163 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-utilities\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.218065 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rfnm"] Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.228481 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.229284 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.300906 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.301332 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.301407 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.301811 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.801797965 +0000 UTC m=+156.144726234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.340490 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnjxf\" (UniqueName: \"kubernetes.io/projected/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-kube-api-access-jnjxf\") pod \"redhat-operators-5rfnm\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.356257 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.398926 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.403102 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.403397 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.403535 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.403642 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.403715 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:49.90368377 +0000 UTC m=+156.246612049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.439897 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmpvx"] Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.448093 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.456823 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.478656 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" podStartSLOduration=134.47862619 podStartE2EDuration="2m14.47862619s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:49.443491137 +0000 UTC m=+155.786419416" watchObservedRunningTime="2026-02-17 14:08:49.47862619 +0000 UTC m=+155.821554489" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.481867 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.507533 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.507874 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.007860486 +0000 UTC m=+156.350788765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.508277 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.601371 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmk55"] Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.608431 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.608768 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.108752605 +0000 UTC m=+156.451680884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.611858 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfmw4"] Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.714446 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.715251 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.215234572 +0000 UTC m=+156.558162851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.830526 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.831000 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.330982895 +0000 UTC m=+156.673911164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.946150 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:49 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:49 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:49 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.946204 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.947109 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:49 crc kubenswrapper[4836]: E0217 14:08:49.947452 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:50.447437697 +0000 UTC m=+156.790365966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:49 crc kubenswrapper[4836]: I0217 14:08:49.975827 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kqqh"] Feb 17 14:08:50 crc kubenswrapper[4836]: W0217 14:08:50.035655 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8762f2f2_8375_4fdd_8a29_ea2ab598afa1.slice/crio-a5a3c8cb6babc233f1c2ac1e8dd3635788628a1cb1bb705f8a779b47d0562e2b WatchSource:0}: Error finding container a5a3c8cb6babc233f1c2ac1e8dd3635788628a1cb1bb705f8a779b47d0562e2b: Status 404 returned error can't find the container with id a5a3c8cb6babc233f1c2ac1e8dd3635788628a1cb1bb705f8a779b47d0562e2b Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.039390 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwhr"] Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.052284 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.052593 4836 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.552575578 +0000 UTC m=+156.895503847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.135731 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmpvx" event={"ID":"c6c873c6-ddde-4b9b-9141-e6de9be567d4","Type":"ContainerStarted","Data":"79e0157c4fae70c4a163e7552bd45039fe6e084cf3fa63db4fbd428401695df6"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.145996 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8zr" event={"ID":"089d1289-afe9-4ffe-9d96-ac10058335ed","Type":"ContainerStarted","Data":"aec4a035ba778cf216a49780b8ffa622c813a3d3daa4a826e68b03c1acc34c4d"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.154068 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:50 crc kubenswrapper[4836]: W0217 14:08:50.154538 4836 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9f23804_837d_4d3c_94b7_7cdefe6e94df.slice/crio-aef23167292c1dafb12389117081e87d3bd5bee8abc67ecc65bf3cd0a4bf9f1c WatchSource:0}: Error finding container aef23167292c1dafb12389117081e87d3bd5bee8abc67ecc65bf3cd0a4bf9f1c: Status 404 returned error can't find the container with id aef23167292c1dafb12389117081e87d3bd5bee8abc67ecc65bf3cd0a4bf9f1c Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.154777 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.654762412 +0000 UTC m=+156.997690691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.161014 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kqqh" event={"ID":"c85860a6-c3bb-448b-b812-cbf38230de01","Type":"ContainerStarted","Data":"f9efa614ea777c6c1f7f2234c739bb0e406ce4096c5477be16d8aba1cfb4c85e"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.249312 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfmw4" event={"ID":"8762f2f2-8375-4fdd-8a29-ea2ab598afa1","Type":"ContainerStarted","Data":"a5a3c8cb6babc233f1c2ac1e8dd3635788628a1cb1bb705f8a779b47d0562e2b"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.257553 4836 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.257651 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.757614383 +0000 UTC m=+157.100542652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.257794 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.258171 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.758156697 +0000 UTC m=+157.101084966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.299857 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5279ed1ec23ad195ee96d2ed7467f460d22af9b931e9b620c8ade5d19327e788"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.301128 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.303566 4836 generic.go:334] "Generic (PLEG): container finished" podID="91eb437c-beea-4f2d-b3f7-505b87fe6dee" containerID="4b8580f44aade0425b4de34e0f49d07bd6192e526f9c10aa11b53556a3546660" exitCode=0 Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.303694 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" event={"ID":"91eb437c-beea-4f2d-b3f7-505b87fe6dee","Type":"ContainerDied","Data":"4b8580f44aade0425b4de34e0f49d07bd6192e526f9c10aa11b53556a3546660"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.305464 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"caed4fb3-6dd4-4427-880f-fee413854d48","Type":"ContainerStarted","Data":"894f04d8bb69cf58bea3ab4206057ab2e51ebe330575f3a10c3c1c616fcfa44c"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.307614 4836 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmk55" event={"ID":"f1bd4ed0-3b99-4446-9218-71bb589da4a4","Type":"ContainerStarted","Data":"6e75d917f9b18c07b2feade7d6ceab556bb6226e0a78e8a3d47b72928e406bad"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.415671 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.416139 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:50.916118871 +0000 UTC m=+157.259047140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.517639 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.520305 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 14:08:51.020211364 +0000 UTC m=+157.363139633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.653233 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.769806 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=7.76976773 podStartE2EDuration="7.76976773s" podCreationTimestamp="2026-02-17 14:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:50.759142848 +0000 UTC m=+157.102071127" watchObservedRunningTime="2026-02-17 14:08:50.76976773 +0000 UTC m=+157.112695999" Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.815050 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 14:08:51.314985521 +0000 UTC m=+157.657913790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.815981 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.817636 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" event={"ID":"c2f4f6fb-d604-402f-83d0-6b25781c3aa8","Type":"ContainerStarted","Data":"d7db103eb0f05f6770e34d461544f70605661f51228b5fac894fae8ee978b438"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.817671 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"53956ee709ceed9511b467ea1bb33e2e4cb24195855b9addf992d2f901ce9683"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.817686 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-252vj"] Feb 17 14:08:50 crc kubenswrapper[4836]: E0217 14:08:50.817977 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 14:08:51.31796831 +0000 UTC m=+157.660896579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5vhz9" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.838460 4836 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.852383 4836 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T14:08:50.838493506Z","Handler":null,"Name":""} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.855543 4836 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.855570 4836 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.856569 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3c737909227f0810cdfcb9e38b0fa4e5dbbb9bdbeb1deb19a77ed4f06c928a68"} Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 
14:08:50.925317 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 14:08:50 crc kubenswrapper[4836]: I0217 14:08:50.935382 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.026960 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.053283 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.053334 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.089597 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:51 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:51 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:51 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.089650 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.287746 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5vhz9\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.342835 4836 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.387041 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rfnm"] Feb 17 14:08:51 crc kubenswrapper[4836]: W0217 14:08:51.404397 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88cf2bb1_d70f_4b82_9b9a_9d7c7a4244ff.slice/crio-f4532e92cda0f4cb49095bb6a57a64c5099b9a9601e8744de1408686bb81a1cb WatchSource:0}: Error finding container f4532e92cda0f4cb49095bb6a57a64c5099b9a9601e8744de1408686bb81a1cb: Status 404 returned error can't find the container with id f4532e92cda0f4cb49095bb6a57a64c5099b9a9601e8744de1408686bb81a1cb Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.441850 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.915699 4836 generic.go:334] "Generic (PLEG): container finished" podID="c85860a6-c3bb-448b-b812-cbf38230de01" containerID="d36870560f8d1243c818dca57cf74dea7a07e8c43795bb396db32ccfc2a302b6" exitCode=0 Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.916428 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kqqh" event={"ID":"c85860a6-c3bb-448b-b812-cbf38230de01","Type":"ContainerDied","Data":"d36870560f8d1243c818dca57cf74dea7a07e8c43795bb396db32ccfc2a302b6"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.921256 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:51 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:51 crc kubenswrapper[4836]: 
[+]process-running ok Feb 17 14:08:51 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.921593 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.925900 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"895e5f35-c3c6-46b6-878c-6d9a47b6221f","Type":"ContainerStarted","Data":"4866eba95fc74299a5d4d267763f0b47fa1876ffea3c2307e4ea9572f0fa5ed5"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.925941 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"895e5f35-c3c6-46b6-878c-6d9a47b6221f","Type":"ContainerStarted","Data":"9a71104ba91fe474c5ec1895a885d742689d99f71786a709bb26ffc4e5fce4b7"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.926773 4836 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.928539 4836 generic.go:334] "Generic (PLEG): container finished" podID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerID="f5f1510b84a48fd765ca27386941284d20f6da0225cb6c655223588a86aa6f8f" exitCode=0 Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.928606 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8zr" event={"ID":"089d1289-afe9-4ffe-9d96-ac10058335ed","Type":"ContainerDied","Data":"f5f1510b84a48fd765ca27386941284d20f6da0225cb6c655223588a86aa6f8f"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.932577 4836 generic.go:334] "Generic (PLEG): container finished" podID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" 
containerID="8aece0956593a85800757e782bdc3eb1d3d87f1ac99e3fc8ce9f7012a48be219" exitCode=0 Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.932660 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmpvx" event={"ID":"c6c873c6-ddde-4b9b-9141-e6de9be567d4","Type":"ContainerDied","Data":"8aece0956593a85800757e782bdc3eb1d3d87f1ac99e3fc8ce9f7012a48be219"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.942366 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5vhz9"] Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.961250 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" event={"ID":"c2f4f6fb-d604-402f-83d0-6b25781c3aa8","Type":"ContainerStarted","Data":"c23147920100542e35ac81853accaec49db8552b462d160596b4b74afffcf2a6"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.974062 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerStarted","Data":"6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.974111 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerStarted","Data":"f4532e92cda0f4cb49095bb6a57a64c5099b9a9601e8744de1408686bb81a1cb"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.977283 4836 generic.go:334] "Generic (PLEG): container finished" podID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerID="73060b123dbcdb54cacfb96235e77305156ac3a055b89a97013a4725f13fbc92" exitCode=0 Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.977784 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwhr" 
event={"ID":"e9f23804-837d-4d3c-94b7-7cdefe6e94df","Type":"ContainerDied","Data":"73060b123dbcdb54cacfb96235e77305156ac3a055b89a97013a4725f13fbc92"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.977829 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwhr" event={"ID":"e9f23804-837d-4d3c-94b7-7cdefe6e94df","Type":"ContainerStarted","Data":"aef23167292c1dafb12389117081e87d3bd5bee8abc67ecc65bf3cd0a4bf9f1c"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.978782 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.9787661 podStartE2EDuration="2.9787661s" podCreationTimestamp="2026-02-17 14:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:51.95279714 +0000 UTC m=+158.295725419" watchObservedRunningTime="2026-02-17 14:08:51.9787661 +0000 UTC m=+158.321694369" Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.982373 4836 generic.go:334] "Generic (PLEG): container finished" podID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerID="fdc430f3f9d22a422de0b99423af704e6cc0b0c2a36fc9623c6db36600886e79" exitCode=0 Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.982440 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfmw4" event={"ID":"8762f2f2-8375-4fdd-8a29-ea2ab598afa1","Type":"ContainerDied","Data":"fdc430f3f9d22a422de0b99423af704e6cc0b0c2a36fc9623c6db36600886e79"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.988135 4836 generic.go:334] "Generic (PLEG): container finished" podID="a172042c-7dc6-4cea-906e-3d9135523f15" containerID="fbdef3e9d702e26b2d9eab100a7cb39741759b5bc646072d63aa2cde6951ee43" exitCode=0 Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.988218 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-252vj" event={"ID":"a172042c-7dc6-4cea-906e-3d9135523f15","Type":"ContainerDied","Data":"fbdef3e9d702e26b2d9eab100a7cb39741759b5bc646072d63aa2cde6951ee43"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.988246 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-252vj" event={"ID":"a172042c-7dc6-4cea-906e-3d9135523f15","Type":"ContainerStarted","Data":"8ef482fc8eb2712be43ba1d606607d7a887e18d38349afed73ed063a65b62543"} Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.996656 4836 generic.go:334] "Generic (PLEG): container finished" podID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerID="9679e4b4c4f0f644eb56ce9a9ac7ad7178d79f35bc0d94642d6b2ded1809a114" exitCode=0 Feb 17 14:08:51 crc kubenswrapper[4836]: I0217 14:08:51.996719 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmk55" event={"ID":"f1bd4ed0-3b99-4446-9218-71bb589da4a4","Type":"ContainerDied","Data":"9679e4b4c4f0f644eb56ce9a9ac7ad7178d79f35bc0d94642d6b2ded1809a114"} Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.594447 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.853183 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.923036 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:52 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:52 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:52 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.923092 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.935524 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91eb437c-beea-4f2d-b3f7-505b87fe6dee-secret-volume\") pod \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.935587 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kdnd\" (UniqueName: \"kubernetes.io/projected/91eb437c-beea-4f2d-b3f7-505b87fe6dee-kube-api-access-5kdnd\") pod \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.935727 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb437c-beea-4f2d-b3f7-505b87fe6dee-config-volume\") pod \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\" (UID: \"91eb437c-beea-4f2d-b3f7-505b87fe6dee\") " 
Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.936732 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91eb437c-beea-4f2d-b3f7-505b87fe6dee-config-volume" (OuterVolumeSpecName: "config-volume") pod "91eb437c-beea-4f2d-b3f7-505b87fe6dee" (UID: "91eb437c-beea-4f2d-b3f7-505b87fe6dee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.965367 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91eb437c-beea-4f2d-b3f7-505b87fe6dee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "91eb437c-beea-4f2d-b3f7-505b87fe6dee" (UID: "91eb437c-beea-4f2d-b3f7-505b87fe6dee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:52 crc kubenswrapper[4836]: I0217 14:08:52.973060 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91eb437c-beea-4f2d-b3f7-505b87fe6dee-kube-api-access-5kdnd" (OuterVolumeSpecName: "kube-api-access-5kdnd") pod "91eb437c-beea-4f2d-b3f7-505b87fe6dee" (UID: "91eb437c-beea-4f2d-b3f7-505b87fe6dee"). InnerVolumeSpecName "kube-api-access-5kdnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.036994 4836 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91eb437c-beea-4f2d-b3f7-505b87fe6dee-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.037040 4836 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91eb437c-beea-4f2d-b3f7-505b87fe6dee-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.037056 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kdnd\" (UniqueName: \"kubernetes.io/projected/91eb437c-beea-4f2d-b3f7-505b87fe6dee-kube-api-access-5kdnd\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.064813 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" event={"ID":"c2f4f6fb-d604-402f-83d0-6b25781c3aa8","Type":"ContainerStarted","Data":"5545da0ab4aa419e6b59b3baf5274c6959da6ac1e41ca507d90e592cd6ad25c6"} Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.068590 4836 generic.go:334] "Generic (PLEG): container finished" podID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerID="6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8" exitCode=0 Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.068660 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerDied","Data":"6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8"} Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.070373 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" 
event={"ID":"91eb437c-beea-4f2d-b3f7-505b87fe6dee","Type":"ContainerDied","Data":"d146dcdb5314cbe00f41e751cbb49254cf23ab6e8cdc06caed3ce29aac7230d2"} Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.070394 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d146dcdb5314cbe00f41e751cbb49254cf23ab6e8cdc06caed3ce29aac7230d2" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.070463 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.085108 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" event={"ID":"4cd3f585-c95f-43ee-962c-ea33aff90415","Type":"ContainerStarted","Data":"bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6"} Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.085161 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" event={"ID":"4cd3f585-c95f-43ee-962c-ea33aff90415","Type":"ContainerStarted","Data":"b92bf709add22f9c57e92a26debc7c9604b5ddd76791fbcef0b8821c381eba8e"} Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.085206 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.088539 4836 generic.go:334] "Generic (PLEG): container finished" podID="caed4fb3-6dd4-4427-880f-fee413854d48" containerID="894f04d8bb69cf58bea3ab4206057ab2e51ebe330575f3a10c3c1c616fcfa44c" exitCode=0 Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.088615 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"caed4fb3-6dd4-4427-880f-fee413854d48","Type":"ContainerDied","Data":"894f04d8bb69cf58bea3ab4206057ab2e51ebe330575f3a10c3c1c616fcfa44c"} Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.105360 4836 generic.go:334] "Generic (PLEG): container finished" podID="895e5f35-c3c6-46b6-878c-6d9a47b6221f" containerID="4866eba95fc74299a5d4d267763f0b47fa1876ffea3c2307e4ea9572f0fa5ed5" exitCode=0 Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.106055 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"895e5f35-c3c6-46b6-878c-6d9a47b6221f","Type":"ContainerDied","Data":"4866eba95fc74299a5d4d267763f0b47fa1876ffea3c2307e4ea9572f0fa5ed5"} Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.161621 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6scjm" podStartSLOduration=19.161590515 podStartE2EDuration="19.161590515s" podCreationTimestamp="2026-02-17 14:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:53.16104461 +0000 UTC m=+159.503972899" watchObservedRunningTime="2026-02-17 14:08:53.161590515 +0000 UTC m=+159.504518784" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.264158 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" podStartSLOduration=138.264112867 podStartE2EDuration="2m18.264112867s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:08:53.263826959 +0000 UTC m=+159.606755238" watchObservedRunningTime="2026-02-17 14:08:53.264112867 +0000 UTC m=+159.607041156" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.306032 4836 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-dns/dns-default-284hg" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.617719 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.618285 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.941522 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:53 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:53 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:53 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:53 crc kubenswrapper[4836]: I0217 14:08:53.941596 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:54 crc kubenswrapper[4836]: I0217 14:08:54.072099 4836 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cnq25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 14:08:54 crc kubenswrapper[4836]: [+]log ok Feb 17 14:08:54 crc kubenswrapper[4836]: [+]etcd ok Feb 17 14:08:54 crc kubenswrapper[4836]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 14:08:54 crc kubenswrapper[4836]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 14:08:54 crc kubenswrapper[4836]: [+]poststarthook/max-in-flight-filter ok Feb 17 14:08:54 crc kubenswrapper[4836]: 
[+]poststarthook/storage-object-count-tracker-hook ok Feb 17 14:08:54 crc kubenswrapper[4836]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 14:08:54 crc kubenswrapper[4836]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 14:08:54 crc kubenswrapper[4836]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 14:08:54 crc kubenswrapper[4836]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 14:08:54 crc kubenswrapper[4836]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 14:08:54 crc kubenswrapper[4836]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 17 14:08:54 crc kubenswrapper[4836]: [-]poststarthook/openshift.io-restmapperupdater failed: reason withheld Feb 17 14:08:54 crc kubenswrapper[4836]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 14:08:54 crc kubenswrapper[4836]: livez check failed Feb 17 14:08:54 crc kubenswrapper[4836]: I0217 14:08:54.072222 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" podUID="66402e53-3287-45c4-bceb-78fc99836c5b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:54 crc kubenswrapper[4836]: I0217 14:08:54.929020 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:54 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:54 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:54 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:54 crc kubenswrapper[4836]: I0217 14:08:54.929121 4836 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:55 crc kubenswrapper[4836]: I0217 14:08:55.634393 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:55 crc kubenswrapper[4836]: I0217 14:08:55.692534 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kube-api-access\") pod \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " Feb 17 14:08:55 crc kubenswrapper[4836]: I0217 14:08:55.692910 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kubelet-dir\") pod \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\" (UID: \"895e5f35-c3c6-46b6-878c-6d9a47b6221f\") " Feb 17 14:08:55 crc kubenswrapper[4836]: I0217 14:08:55.693449 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "895e5f35-c3c6-46b6-878c-6d9a47b6221f" (UID: "895e5f35-c3c6-46b6-878c-6d9a47b6221f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:08:55 crc kubenswrapper[4836]: I0217 14:08:55.807121 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "895e5f35-c3c6-46b6-878c-6d9a47b6221f" (UID: "895e5f35-c3c6-46b6-878c-6d9a47b6221f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:55 crc kubenswrapper[4836]: I0217 14:08:55.807618 4836 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:55 crc kubenswrapper[4836]: I0217 14:08:55.807644 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/895e5f35-c3c6-46b6-878c-6d9a47b6221f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.020360 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:56 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:56 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:56 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.020495 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.084237 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.199999 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caed4fb3-6dd4-4427-880f-fee413854d48-kube-api-access\") pod \"caed4fb3-6dd4-4427-880f-fee413854d48\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.200064 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/caed4fb3-6dd4-4427-880f-fee413854d48-kubelet-dir\") pod \"caed4fb3-6dd4-4427-880f-fee413854d48\" (UID: \"caed4fb3-6dd4-4427-880f-fee413854d48\") " Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.200449 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caed4fb3-6dd4-4427-880f-fee413854d48-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "caed4fb3-6dd4-4427-880f-fee413854d48" (UID: "caed4fb3-6dd4-4427-880f-fee413854d48"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.265912 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caed4fb3-6dd4-4427-880f-fee413854d48-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "caed4fb3-6dd4-4427-880f-fee413854d48" (UID: "caed4fb3-6dd4-4427-880f-fee413854d48"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.337731 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caed4fb3-6dd4-4427-880f-fee413854d48-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.337856 4836 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/caed4fb3-6dd4-4427-880f-fee413854d48-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.415427 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.415458 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.415525 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.415600 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:08:56 
crc kubenswrapper[4836]: I0217 14:08:56.449132 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"caed4fb3-6dd4-4427-880f-fee413854d48","Type":"ContainerDied","Data":"b0ae66c7e07c61466cd3f90b98740cfd0b7ef75ac524fdbc34cb7a0d3e897bbf"} Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.449185 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ae66c7e07c61466cd3f90b98740cfd0b7ef75ac524fdbc34cb7a0d3e897bbf" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.449215 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.523986 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"895e5f35-c3c6-46b6-878c-6d9a47b6221f","Type":"ContainerDied","Data":"9a71104ba91fe474c5ec1895a885d742689d99f71786a709bb26ffc4e5fce4b7"} Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.524028 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a71104ba91fe474c5ec1895a885d742689d99f71786a709bb26ffc4e5fce4b7" Feb 17 14:08:56 crc kubenswrapper[4836]: I0217 14:08:56.524143 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 14:08:57 crc kubenswrapper[4836]: I0217 14:08:57.049888 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:57 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:57 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:57 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:57 crc kubenswrapper[4836]: I0217 14:08:57.050525 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:57 crc kubenswrapper[4836]: I0217 14:08:57.927573 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:08:57 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:08:57 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:08:57 crc kubenswrapper[4836]: healthz check failed Feb 17 14:08:57 crc kubenswrapper[4836]: I0217 14:08:57.927636 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:08:58 crc kubenswrapper[4836]: I0217 14:08:58.030590 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod 
\"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:58 crc kubenswrapper[4836]: I0217 14:08:58.052569 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c-metrics-certs\") pod \"network-metrics-daemon-c4txt\" (UID: \"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c\") " pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:58 crc kubenswrapper[4836]: I0217 14:08:58.068379 4836 patch_prober.go:28] interesting pod/console-f9d7485db-6zspj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 17 14:08:58 crc kubenswrapper[4836]: I0217 14:08:58.068487 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6zspj" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 17 14:08:58 crc kubenswrapper[4836]: I0217 14:08:58.311079 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c4txt" Feb 17 14:08:58 crc kubenswrapper[4836]: I0217 14:08:58.708760 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:08:58 crc kubenswrapper[4836]: I0217 14:08:58.785894 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cnq25" Feb 17 14:09:00 crc kubenswrapper[4836]: I0217 14:09:00.791146 4836 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded" start-of-body= Feb 17 14:09:00 crc kubenswrapper[4836]: I0217 14:09:00.791609 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded" Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.003164 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:01 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:01 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:01 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.003448 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.003530 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.005767 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.010507 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:01 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:01 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:01 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.010578 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.947008 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c4txt"] Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.952509 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:01 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:01 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:01 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:01 crc kubenswrapper[4836]: I0217 14:09:01.952575 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:02 crc kubenswrapper[4836]: I0217 14:09:02.921520 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:02 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:02 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:02 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:02 crc kubenswrapper[4836]: I0217 14:09:02.921682 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:03 crc kubenswrapper[4836]: I0217 14:09:03.279174 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c4txt" event={"ID":"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c","Type":"ContainerStarted","Data":"016aa4d7249177ed49714a2acb840cce0bfb12481beb7a8fa1a30cc84f4bbaa2"} Feb 17 14:09:03 crc kubenswrapper[4836]: I0217 14:09:03.920327 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:03 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:03 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:03 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:03 crc kubenswrapper[4836]: I0217 14:09:03.920387 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:04 crc kubenswrapper[4836]: I0217 14:09:04.339826 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c4txt" event={"ID":"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c","Type":"ContainerStarted","Data":"4b8910ff1472227e3b9e3d130a0d5ea1b05c5c942dff038408499b0c5bd79471"} Feb 17 14:09:04 crc kubenswrapper[4836]: I0217 14:09:04.923244 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:04 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:04 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:04 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:04 crc kubenswrapper[4836]: I0217 14:09:04.923585 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:05 crc kubenswrapper[4836]: I0217 14:09:05.381445 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c4txt" 
event={"ID":"8fcd2b6e-6e9f-4bbc-8625-2f21ab4b2c7c","Type":"ContainerStarted","Data":"370bf7639c10502d35b797eb6839b9eb3917522b465c0a4dd7664837b6787193"} Feb 17 14:09:05 crc kubenswrapper[4836]: I0217 14:09:05.421810 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c4txt" podStartSLOduration=150.421758751 podStartE2EDuration="2m30.421758751s" podCreationTimestamp="2026-02-17 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:09:05.406736112 +0000 UTC m=+171.749664401" watchObservedRunningTime="2026-02-17 14:09:05.421758751 +0000 UTC m=+171.764687040" Feb 17 14:09:05 crc kubenswrapper[4836]: I0217 14:09:05.954910 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:05 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:05 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:05 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:05 crc kubenswrapper[4836]: I0217 14:09:05.954978 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.445185 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.445208 4836 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.445257 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.445325 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.450274 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.450852 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.450895 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.450924 4836 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="download-server" containerStatusID={"Type":"cri-o","ID":"92b59bab9fd909d359405ecf217a49ab1de8122281a49768577c5a706060d118"} pod="openshift-console/downloads-7954f5f757-5cbbv" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 17 14:09:06 crc kubenswrapper[4836]: I0217 14:09:06.451036 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" containerID="cri-o://92b59bab9fd909d359405ecf217a49ab1de8122281a49768577c5a706060d118" gracePeriod=2 Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.001162 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:07 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:07 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:07 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.001780 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.497423 4836 generic.go:334] "Generic (PLEG): container finished" podID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerID="92b59bab9fd909d359405ecf217a49ab1de8122281a49768577c5a706060d118" exitCode=0 Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.497515 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cbbv" 
event={"ID":"d9eb5c8b-f3c7-4068-82c7-28520f6905c6","Type":"ContainerDied","Data":"92b59bab9fd909d359405ecf217a49ab1de8122281a49768577c5a706060d118"} Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.497559 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cbbv" event={"ID":"d9eb5c8b-f3c7-4068-82c7-28520f6905c6","Type":"ContainerStarted","Data":"9f7cb281e045dc9dce4b8664374b7c5b4f753c5186831200e7b466bbba132db3"} Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.499315 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.499413 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.499448 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.957882 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:07 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:07 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:07 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:07 crc kubenswrapper[4836]: I0217 14:09:07.958613 4836 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:08 crc kubenswrapper[4836]: I0217 14:09:08.146726 4836 patch_prober.go:28] interesting pod/console-f9d7485db-6zspj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 17 14:09:08 crc kubenswrapper[4836]: I0217 14:09:08.146941 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6zspj" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 17 14:09:08 crc kubenswrapper[4836]: I0217 14:09:08.720906 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:09:08 crc kubenswrapper[4836]: I0217 14:09:08.721679 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:09:09 crc kubenswrapper[4836]: I0217 14:09:08.958259 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:09 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:09 crc 
kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:09 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:09 crc kubenswrapper[4836]: I0217 14:09:08.958529 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:09 crc kubenswrapper[4836]: I0217 14:09:09.921061 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:09 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:09 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:09 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:09 crc kubenswrapper[4836]: I0217 14:09:09.921200 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:10 crc kubenswrapper[4836]: I0217 14:09:10.924045 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:10 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:10 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:10 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:10 crc kubenswrapper[4836]: I0217 14:09:10.924168 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:11 crc kubenswrapper[4836]: I0217 14:09:11.525208 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:09:11 crc kubenswrapper[4836]: I0217 14:09:11.991769 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:11 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:11 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:11 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:11 crc kubenswrapper[4836]: I0217 14:09:11.991840 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:12 crc kubenswrapper[4836]: I0217 14:09:12.998022 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:12 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:12 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:12 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:12 crc kubenswrapper[4836]: I0217 14:09:12.998513 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:13 crc kubenswrapper[4836]: I0217 
14:09:13.953407 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:13 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:13 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:13 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:13 crc kubenswrapper[4836]: I0217 14:09:13.953534 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:15 crc kubenswrapper[4836]: I0217 14:09:15.004919 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:15 crc kubenswrapper[4836]: [-]has-synced failed: reason withheld Feb 17 14:09:15 crc kubenswrapper[4836]: [+]process-running ok Feb 17 14:09:15 crc kubenswrapper[4836]: healthz check failed Feb 17 14:09:15 crc kubenswrapper[4836]: I0217 14:09:15.005718 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:09:15 crc kubenswrapper[4836]: I0217 14:09:15.921078 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 14:09:15 crc kubenswrapper[4836]: [-]has-synced failed: reason 
withheld
Feb 17 14:09:15 crc kubenswrapper[4836]: [+]process-running ok
Feb 17 14:09:15 crc kubenswrapper[4836]: healthz check failed
Feb 17 14:09:15 crc kubenswrapper[4836]: I0217 14:09:15.921157 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 14:09:16 crc kubenswrapper[4836]: I0217 14:09:16.425907 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:16 crc kubenswrapper[4836]: I0217 14:09:16.426036 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:16 crc kubenswrapper[4836]: I0217 14:09:16.427269 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:16 crc kubenswrapper[4836]: I0217 14:09:16.427443 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:16 crc kubenswrapper[4836]: I0217 14:09:16.924613 4836 patch_prober.go:28] interesting pod/router-default-5444994796-fqzrl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 14:09:16 crc kubenswrapper[4836]: [+]has-synced ok
Feb 17 14:09:16 crc kubenswrapper[4836]: [+]process-running ok
Feb 17 14:09:16 crc kubenswrapper[4836]: healthz check failed
Feb 17 14:09:16 crc kubenswrapper[4836]: I0217 14:09:16.925160 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fqzrl" podUID="296ae94a-36e6-480b-9395-8f6a96621fdf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 14:09:17 crc kubenswrapper[4836]: I0217 14:09:17.940938 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fqzrl"
Feb 17 14:09:17 crc kubenswrapper[4836]: I0217 14:09:17.955824 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blr59"
Feb 17 14:09:17 crc kubenswrapper[4836]: I0217 14:09:17.973070 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fqzrl"
Feb 17 14:09:18 crc kubenswrapper[4836]: I0217 14:09:18.074470 4836 patch_prober.go:28] interesting pod/console-f9d7485db-6zspj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Feb 17 14:09:18 crc kubenswrapper[4836]: I0217 14:09:18.074536 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6zspj" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused"
Feb 17 14:09:23 crc kubenswrapper[4836]: I0217 14:09:23.771679 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.322282 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 14:09:26 crc kubenswrapper[4836]: E0217 14:09:26.322980 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91eb437c-beea-4f2d-b3f7-505b87fe6dee" containerName="collect-profiles"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.322999 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="91eb437c-beea-4f2d-b3f7-505b87fe6dee" containerName="collect-profiles"
Feb 17 14:09:26 crc kubenswrapper[4836]: E0217 14:09:26.323026 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895e5f35-c3c6-46b6-878c-6d9a47b6221f" containerName="pruner"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.323037 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="895e5f35-c3c6-46b6-878c-6d9a47b6221f" containerName="pruner"
Feb 17 14:09:26 crc kubenswrapper[4836]: E0217 14:09:26.323057 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caed4fb3-6dd4-4427-880f-fee413854d48" containerName="pruner"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.323068 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="caed4fb3-6dd4-4427-880f-fee413854d48" containerName="pruner"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.323247 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="91eb437c-beea-4f2d-b3f7-505b87fe6dee" containerName="collect-profiles"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.323378 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="caed4fb3-6dd4-4427-880f-fee413854d48" containerName="pruner"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.323394 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="895e5f35-c3c6-46b6-878c-6d9a47b6221f" containerName="pruner"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.324006 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.330005 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.330391 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.339522 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.419366 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.419440 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.436512 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.436593 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.566467 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c99af120-e5bb-45a6-baec-1157b240bda6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.566545 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c99af120-e5bb-45a6-baec-1157b240bda6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.667423 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c99af120-e5bb-45a6-baec-1157b240bda6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.667502 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c99af120-e5bb-45a6-baec-1157b240bda6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.667553 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c99af120-e5bb-45a6-baec-1157b240bda6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.702208 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c99af120-e5bb-45a6-baec-1157b240bda6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:26 crc kubenswrapper[4836]: I0217 14:09:26.964657 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 14:09:28 crc kubenswrapper[4836]: I0217 14:09:28.120537 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:09:28 crc kubenswrapper[4836]: I0217 14:09:28.127940 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6zspj"
Feb 17 14:09:29 crc kubenswrapper[4836]: I0217 14:09:29.783902 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:09:29 crc kubenswrapper[4836]: I0217 14:09:29.784271 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:09:31 crc kubenswrapper[4836]: I0217 14:09:31.859221 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 14:09:31 crc kubenswrapper[4836]: I0217 14:09:31.860451 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:31 crc kubenswrapper[4836]: I0217 14:09:31.899995 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.041521 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb43a5d-f735-4891-912a-3ba9e47a4055-kube-api-access\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.042786 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.042837 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-var-lock\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.220191 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb43a5d-f735-4891-912a-3ba9e47a4055-kube-api-access\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.220312 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.220335 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-var-lock\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.220632 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-var-lock\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.221079 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.261983 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb43a5d-f735-4891-912a-3ba9e47a4055-kube-api-access\") pod \"installer-9-crc\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:32 crc kubenswrapper[4836]: I0217 14:09:32.580385 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.417687 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.418912 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.420736 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.420879 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.420971 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-5cbbv"
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.422114 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9f7cb281e045dc9dce4b8664374b7c5b4f753c5186831200e7b466bbba132db3"} pod="openshift-console/downloads-7954f5f757-5cbbv" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.422190 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.422243 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:36 crc kubenswrapper[4836]: I0217 14:09:36.422188 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" containerID="cri-o://9f7cb281e045dc9dce4b8664374b7c5b4f753c5186831200e7b466bbba132db3" gracePeriod=2
Feb 17 14:09:37 crc kubenswrapper[4836]: I0217 14:09:37.148984 4836 generic.go:334] "Generic (PLEG): container finished" podID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerID="9f7cb281e045dc9dce4b8664374b7c5b4f753c5186831200e7b466bbba132db3" exitCode=0
Feb 17 14:09:37 crc kubenswrapper[4836]: I0217 14:09:37.149064 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cbbv" event={"ID":"d9eb5c8b-f3c7-4068-82c7-28520f6905c6","Type":"ContainerDied","Data":"9f7cb281e045dc9dce4b8664374b7c5b4f753c5186831200e7b466bbba132db3"}
Feb 17 14:09:37 crc kubenswrapper[4836]: I0217 14:09:37.149121 4836 scope.go:117] "RemoveContainer" containerID="92b59bab9fd909d359405ecf217a49ab1de8122281a49768577c5a706060d118"
Feb 17 14:09:39 crc kubenswrapper[4836]: I0217 14:09:39.342765 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 14:09:39 crc kubenswrapper[4836]: I0217 14:09:39.420016 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 14:09:42 crc kubenswrapper[4836]: W0217 14:09:42.658947 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podffb43a5d_f735_4891_912a_3ba9e47a4055.slice/crio-29d09f7152c586bbd27086739e98c573e0ee0e7fe283e6e465f8d46989dcb7a3 WatchSource:0}: Error finding container 29d09f7152c586bbd27086739e98c573e0ee0e7fe283e6e465f8d46989dcb7a3: Status 404 returned error can't find the container with id 29d09f7152c586bbd27086739e98c573e0ee0e7fe283e6e465f8d46989dcb7a3
Feb 17 14:09:43 crc kubenswrapper[4836]: I0217 14:09:43.210370 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ffb43a5d-f735-4891-912a-3ba9e47a4055","Type":"ContainerStarted","Data":"29d09f7152c586bbd27086739e98c573e0ee0e7fe283e6e465f8d46989dcb7a3"}
Feb 17 14:09:46 crc kubenswrapper[4836]: I0217 14:09:46.038118 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6rsds"]
Feb 17 14:09:46 crc kubenswrapper[4836]: I0217 14:09:46.238142 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c99af120-e5bb-45a6-baec-1157b240bda6","Type":"ContainerStarted","Data":"405d20c9f1da8228078b48fbb3f4c6d23ca94a32adbe89d4e90ada232dcd9609"}
Feb 17 14:09:46 crc kubenswrapper[4836]: I0217 14:09:46.415282 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:46 crc kubenswrapper[4836]: I0217 14:09:46.415719 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:56 crc kubenswrapper[4836]: I0217 14:09:56.415204 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:09:56 crc kubenswrapper[4836]: I0217 14:09:56.416207 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:09:59 crc kubenswrapper[4836]: I0217 14:09:59.764825 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:09:59 crc kubenswrapper[4836]: I0217 14:09:59.765515 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:09:59 crc kubenswrapper[4836]: I0217 14:09:59.765602 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g"
Feb 17 14:09:59 crc kubenswrapper[4836]: I0217 14:09:59.766903 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 14:09:59 crc kubenswrapper[4836]: I0217 14:09:59.767431 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb" gracePeriod=600
Feb 17 14:10:00 crc kubenswrapper[4836]: I0217 14:10:00.562800 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb" exitCode=0
Feb 17 14:10:00 crc kubenswrapper[4836]: I0217 14:10:00.562848 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb"}
Feb 17 14:10:02 crc kubenswrapper[4836]: E0217 14:10:02.510326 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 17 14:10:02 crc kubenswrapper[4836]: E0217 14:10:02.510574 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thgpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-252vj_openshift-marketplace(a172042c-7dc6-4cea-906e-3d9135523f15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 17 14:10:02 crc kubenswrapper[4836]: E0217 14:10:02.512583 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-252vj" podUID="a172042c-7dc6-4cea-906e-3d9135523f15"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.464416 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-252vj" podUID="a172042c-7dc6-4cea-906e-3d9135523f15"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.558871 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.559282 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnjxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5rfnm_openshift-marketplace(88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.564947 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5rfnm" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.567823 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.568050 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tknx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9w8zr_openshift-marketplace(089d1289-afe9-4ffe-9d96-ac10058335ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.569633 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9w8zr" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.608958 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5rfnm" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff"
Feb 17 14:10:03 crc kubenswrapper[4836]: E0217 14:10:03.623879 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9w8zr" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed"
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.603604 4836 generic.go:334] "Generic (PLEG): container finished" podID="c85860a6-c3bb-448b-b812-cbf38230de01" containerID="8d61046d718ebf03dc13da6194072e3009ef971818f44de3733bfb8ab11c1f92" exitCode=0
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.603786 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kqqh" event={"ID":"c85860a6-c3bb-448b-b812-cbf38230de01","Type":"ContainerDied","Data":"8d61046d718ebf03dc13da6194072e3009ef971818f44de3733bfb8ab11c1f92"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.606527 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ffb43a5d-f735-4891-912a-3ba9e47a4055","Type":"ContainerStarted","Data":"ce3e52bd320d663e1a8fc906cd936e572a8197efd430dd75fd6c81b3471e6dd4"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.610142 4836 generic.go:334] "Generic (PLEG): container finished" podID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerID="03ed8f2f65fff33093a6776fd604dcab5d3520ae863a96ba61bdb418d4e8293c" exitCode=0
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.610207 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwhr" event={"ID":"e9f23804-837d-4d3c-94b7-7cdefe6e94df","Type":"ContainerDied","Data":"03ed8f2f65fff33093a6776fd604dcab5d3520ae863a96ba61bdb418d4e8293c"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.619578 4836 generic.go:334] "Generic (PLEG): container finished" podID="c99af120-e5bb-45a6-baec-1157b240bda6" containerID="775f2de68fea42d390b39e7697da438c32c1bcf52a235f512d84d601f8a51746" exitCode=0
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.619913 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c99af120-e5bb-45a6-baec-1157b240bda6","Type":"ContainerDied","Data":"775f2de68fea42d390b39e7697da438c32c1bcf52a235f512d84d601f8a51746"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.626764 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cbbv" event={"ID":"d9eb5c8b-f3c7-4068-82c7-28520f6905c6","Type":"ContainerStarted","Data":"783e6f3ec1ddf5eee98e5c4ee5983e27d9e7b8f8f8789635783ab63380e75bcf"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.627068 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5cbbv"
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.627387 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.627434 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.629456 4836 generic.go:334] "Generic (PLEG): container finished" podID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerID="73b212b6f45054b199f4f919939777dc7461c5c17f33a2a285ccf07966ece193" exitCode=0
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.629528 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmk55" event={"ID":"f1bd4ed0-3b99-4446-9218-71bb589da4a4","Type":"ContainerDied","Data":"73b212b6f45054b199f4f919939777dc7461c5c17f33a2a285ccf07966ece193"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.636967 4836 generic.go:334] "Generic (PLEG): container finished" podID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerID="eac3b1d23d40a9e3d574bb39b162de6c6b11b16dff12abcfba61b9ba01c21760" exitCode=0
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.637082 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmpvx" event={"ID":"c6c873c6-ddde-4b9b-9141-e6de9be567d4","Type":"ContainerDied","Data":"eac3b1d23d40a9e3d574bb39b162de6c6b11b16dff12abcfba61b9ba01c21760"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.648193 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"6ca471c2a83c51e21c02e6df84d64c6720d133c689bc0501ece1848cccb37b3b"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.654396 4836 generic.go:334] "Generic (PLEG): container finished" podID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerID="c45d87fc95c2bb97baee74cdf9eb8890199ccbcb1361ab9d40701a3bf1b0aef6" exitCode=0
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.654449 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfmw4" event={"ID":"8762f2f2-8375-4fdd-8a29-ea2ab598afa1","Type":"ContainerDied","Data":"c45d87fc95c2bb97baee74cdf9eb8890199ccbcb1361ab9d40701a3bf1b0aef6"}
Feb 17 14:10:04 crc kubenswrapper[4836]: I0217 14:10:04.685674 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=33.685633334 podStartE2EDuration="33.685633334s" podCreationTimestamp="2026-02-17 14:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:10:04.680245211 +0000 UTC m=+231.023173490" watchObservedRunningTime="2026-02-17 14:10:04.685633334 +0000 UTC m=+231.028561603"
Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.664597 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwhr" event={"ID":"e9f23804-837d-4d3c-94b7-7cdefe6e94df","Type":"ContainerStarted","Data":"c2eaa809f67d2bf376430950b6f31e802fbb9ae20ab0242708d65041bbaf3f07"}
Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.667667 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfmw4"
event={"ID":"8762f2f2-8375-4fdd-8a29-ea2ab598afa1","Type":"ContainerStarted","Data":"63859747d78e0b196aa1ae4f9aecdf579a3667fc25d5a072dbcb78b4447b6dc2"} Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.670141 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kqqh" event={"ID":"c85860a6-c3bb-448b-b812-cbf38230de01","Type":"ContainerStarted","Data":"050ce9294d63c1cb12cc162f10c12c570ecb02593b7c8408dbd3bd6ac92c0e81"} Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.672806 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmk55" event={"ID":"f1bd4ed0-3b99-4446-9218-71bb589da4a4","Type":"ContainerStarted","Data":"6edde19e8fa08b860a5493cc87da492992d83a470155d9fcd528dfc9281f65eb"} Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.675979 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmpvx" event={"ID":"c6c873c6-ddde-4b9b-9141-e6de9be567d4","Type":"ContainerStarted","Data":"a23a8c919a9eea21ab628dc33e93962e83f3bdb7249542d3b4905dd07bca224b"} Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.676939 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.677004 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.703700 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-pxwhr" podStartSLOduration=5.62098076 podStartE2EDuration="1m18.703663033s" podCreationTimestamp="2026-02-17 14:08:47 +0000 UTC" firstStartedPulling="2026-02-17 14:08:51.98066947 +0000 UTC m=+158.323597739" lastFinishedPulling="2026-02-17 14:10:05.063351743 +0000 UTC m=+231.406280012" observedRunningTime="2026-02-17 14:10:05.697615693 +0000 UTC m=+232.040543962" watchObservedRunningTime="2026-02-17 14:10:05.703663033 +0000 UTC m=+232.046591302" Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.726250 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmk55" podStartSLOduration=7.557454774 podStartE2EDuration="1m20.726225102s" podCreationTimestamp="2026-02-17 14:08:45 +0000 UTC" firstStartedPulling="2026-02-17 14:08:52.004891994 +0000 UTC m=+158.347820253" lastFinishedPulling="2026-02-17 14:10:05.173662302 +0000 UTC m=+231.516590581" observedRunningTime="2026-02-17 14:10:05.724744163 +0000 UTC m=+232.067672432" watchObservedRunningTime="2026-02-17 14:10:05.726225102 +0000 UTC m=+232.069153371" Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.764373 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5kqqh" podStartSLOduration=5.571011954 podStartE2EDuration="1m18.764340464s" podCreationTimestamp="2026-02-17 14:08:47 +0000 UTC" firstStartedPulling="2026-02-17 14:08:51.926353048 +0000 UTC m=+158.269281327" lastFinishedPulling="2026-02-17 14:10:05.119681578 +0000 UTC m=+231.462609837" observedRunningTime="2026-02-17 14:10:05.759184628 +0000 UTC m=+232.102112917" watchObservedRunningTime="2026-02-17 14:10:05.764340464 +0000 UTC m=+232.107268743" Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.780224 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmpvx" podStartSLOduration=7.564898172 
podStartE2EDuration="1m20.780202396s" podCreationTimestamp="2026-02-17 14:08:45 +0000 UTC" firstStartedPulling="2026-02-17 14:08:51.936706083 +0000 UTC m=+158.279634352" lastFinishedPulling="2026-02-17 14:10:05.152010307 +0000 UTC m=+231.494938576" observedRunningTime="2026-02-17 14:10:05.778527422 +0000 UTC m=+232.121455681" watchObservedRunningTime="2026-02-17 14:10:05.780202396 +0000 UTC m=+232.123130685" Feb 17 14:10:05 crc kubenswrapper[4836]: I0217 14:10:05.806062 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vfmw4" podStartSLOduration=7.577291771 podStartE2EDuration="1m20.806044132s" podCreationTimestamp="2026-02-17 14:08:45 +0000 UTC" firstStartedPulling="2026-02-17 14:08:51.986083604 +0000 UTC m=+158.329011873" lastFinishedPulling="2026-02-17 14:10:05.214835965 +0000 UTC m=+231.557764234" observedRunningTime="2026-02-17 14:10:05.805521828 +0000 UTC m=+232.148450117" watchObservedRunningTime="2026-02-17 14:10:05.806044132 +0000 UTC m=+232.148972401" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.103384 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.266840 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c99af120-e5bb-45a6-baec-1157b240bda6-kube-api-access\") pod \"c99af120-e5bb-45a6-baec-1157b240bda6\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.266909 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c99af120-e5bb-45a6-baec-1157b240bda6-kubelet-dir\") pod \"c99af120-e5bb-45a6-baec-1157b240bda6\" (UID: \"c99af120-e5bb-45a6-baec-1157b240bda6\") " Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.267101 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c99af120-e5bb-45a6-baec-1157b240bda6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c99af120-e5bb-45a6-baec-1157b240bda6" (UID: "c99af120-e5bb-45a6-baec-1157b240bda6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.267397 4836 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c99af120-e5bb-45a6-baec-1157b240bda6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.276490 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99af120-e5bb-45a6-baec-1157b240bda6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c99af120-e5bb-45a6-baec-1157b240bda6" (UID: "c99af120-e5bb-45a6-baec-1157b240bda6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.368884 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.369034 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.369449 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c99af120-e5bb-45a6-baec-1157b240bda6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.415630 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.415687 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.415718 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.415759 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.681928 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.681952 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c99af120-e5bb-45a6-baec-1157b240bda6","Type":"ContainerDied","Data":"405d20c9f1da8228078b48fbb3f4c6d23ca94a32adbe89d4e90ada232dcd9609"} Feb 17 14:10:06 crc kubenswrapper[4836]: I0217 14:10:06.683706 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405d20c9f1da8228078b48fbb3f4c6d23ca94a32adbe89d4e90ada232dcd9609" Feb 17 14:10:07 crc kubenswrapper[4836]: I0217 14:10:07.042289 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:10:07 crc kubenswrapper[4836]: I0217 14:10:07.042376 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:10:07 crc kubenswrapper[4836]: I0217 14:10:07.190453 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:10:07 crc kubenswrapper[4836]: I0217 14:10:07.190702 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:10:07 crc kubenswrapper[4836]: I0217 14:10:07.678654 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tmpvx" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:07 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 
14:10:07 crc kubenswrapper[4836]: > Feb 17 14:10:07 crc kubenswrapper[4836]: I0217 14:10:07.869906 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:10:07 crc kubenswrapper[4836]: I0217 14:10:07.869990 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:10:08 crc kubenswrapper[4836]: I0217 14:10:08.091080 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vfmw4" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:08 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:10:08 crc kubenswrapper[4836]: > Feb 17 14:10:08 crc kubenswrapper[4836]: I0217 14:10:08.144764 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:10:08 crc kubenswrapper[4836]: I0217 14:10:08.144841 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:10:08 crc kubenswrapper[4836]: I0217 14:10:08.429035 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cmk55" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:08 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:10:08 crc kubenswrapper[4836]: > Feb 17 14:10:08 crc kubenswrapper[4836]: I0217 14:10:08.924766 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-pxwhr" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:08 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 
1s Feb 17 14:10:08 crc kubenswrapper[4836]: > Feb 17 14:10:09 crc kubenswrapper[4836]: I0217 14:10:09.195485 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5kqqh" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:09 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:10:09 crc kubenswrapper[4836]: > Feb 17 14:10:11 crc kubenswrapper[4836]: I0217 14:10:11.066690 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" podUID="c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" containerName="oauth-openshift" containerID="cri-o://374ae013639e0d9afa8e234c5feaec0812c5a1b8b7085c57cb72bf432395a8d0" gracePeriod=15 Feb 17 14:10:11 crc kubenswrapper[4836]: I0217 14:10:11.726913 4836 generic.go:334] "Generic (PLEG): container finished" podID="c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" containerID="374ae013639e0d9afa8e234c5feaec0812c5a1b8b7085c57cb72bf432395a8d0" exitCode=0 Feb 17 14:10:11 crc kubenswrapper[4836]: I0217 14:10:11.726977 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" event={"ID":"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7","Type":"ContainerDied","Data":"374ae013639e0d9afa8e234c5feaec0812c5a1b8b7085c57cb72bf432395a8d0"} Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.545113 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598438 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-provider-selection\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598512 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-ocp-branding-template\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598566 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-router-certs\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598605 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6htjx\" (UniqueName: \"kubernetes.io/projected/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-kube-api-access-6htjx\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598684 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-login\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: 
\"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598739 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-error\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598780 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-cliconfig\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598823 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-trusted-ca-bundle\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598897 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-idp-0-file-data\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.598977 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-session\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc 
kubenswrapper[4836]: I0217 14:10:12.599035 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-service-ca\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.599068 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-policies\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.599111 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-serving-cert\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.599151 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-dir\") pod \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\" (UID: \"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7\") " Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.600545 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.600575 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.601115 4836 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.601111 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-574dcf5686-dttcs"] Feb 17 14:10:12 crc kubenswrapper[4836]: E0217 14:10:12.602152 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99af120-e5bb-45a6-baec-1157b240bda6" containerName="pruner" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.602182 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99af120-e5bb-45a6-baec-1157b240bda6" containerName="pruner" Feb 17 14:10:12 crc kubenswrapper[4836]: E0217 14:10:12.602256 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" containerName="oauth-openshift" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.602267 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" containerName="oauth-openshift" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.602624 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99af120-e5bb-45a6-baec-1157b240bda6" containerName="pruner" Feb 17 14:10:12 
crc kubenswrapper[4836]: I0217 14:10:12.602662 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" containerName="oauth-openshift" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.605766 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.601173 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.601162 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.601227 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.602151 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.615738 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574dcf5686-dttcs"] Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.621724 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.622305 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.625668 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.627164 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.627700 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.628835 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-kube-api-access-6htjx" (OuterVolumeSpecName: "kube-api-access-6htjx") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "kube-api-access-6htjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.637149 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.637598 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.640262 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" (UID: "c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708359 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-error\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708423 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 
17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708446 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708470 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-session\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708497 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708544 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45204263-0159-4c86-b81a-a900db07b14f-audit-dir\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708565 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-audit-policies\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708631 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708723 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdct7\" (UniqueName: \"kubernetes.io/projected/45204263-0159-4c86-b81a-a900db07b14f-kube-api-access-wdct7\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708788 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708806 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-service-ca\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: 
\"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708831 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708848 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-login\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708864 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-router-certs\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708971 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.708988 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709002 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709014 4836 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709026 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709036 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709045 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709055 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6htjx\" (UniqueName: \"kubernetes.io/projected/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-kube-api-access-6htjx\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709063 4836 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709073 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709082 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.709093 4836 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.733141 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" event={"ID":"c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7","Type":"ContainerDied","Data":"f9b98c0ff2091be32d114061b6cc2daa5338c6afb2d30dae1e18fe2afc9b3ea3"} Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.733202 4836 scope.go:117] "RemoveContainer" containerID="374ae013639e0d9afa8e234c5feaec0812c5a1b8b7085c57cb72bf432395a8d0" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.733235 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6rsds" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.768192 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6rsds"] Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.771677 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6rsds"] Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811363 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811478 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdct7\" (UniqueName: \"kubernetes.io/projected/45204263-0159-4c86-b81a-a900db07b14f-kube-api-access-wdct7\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811540 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811570 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-service-ca\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811624 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811693 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-login\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811722 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-router-certs\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811796 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-error\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " 
pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811824 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811878 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811904 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-session\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.811971 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.812025 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45204263-0159-4c86-b81a-a900db07b14f-audit-dir\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.812061 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-audit-policies\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.813160 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-audit-policies\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.813776 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.813955 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45204263-0159-4c86-b81a-a900db07b14f-audit-dir\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 
14:10:12.815243 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.815404 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-error\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.815873 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-service-ca\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.818258 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-router-certs\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.818402 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.818545 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-login\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.819496 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.819891 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-session\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.822046 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.823788 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45204263-0159-4c86-b81a-a900db07b14f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.827358 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdct7\" (UniqueName: \"kubernetes.io/projected/45204263-0159-4c86-b81a-a900db07b14f-kube-api-access-wdct7\") pod \"oauth-openshift-574dcf5686-dttcs\" (UID: \"45204263-0159-4c86-b81a-a900db07b14f\") " pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:12 crc kubenswrapper[4836]: I0217 14:10:12.972344 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.075123 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574dcf5686-dttcs"] Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.580128 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7" path="/var/lib/kubelet/pods/c2d8fb42-9c68-4eb3-a8c9-4e4a98772ae7/volumes" Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.801729 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" event={"ID":"45204263-0159-4c86-b81a-a900db07b14f","Type":"ContainerStarted","Data":"45d78833c75355ecb9992b443e268d8aac688a767fe2b22a8848b9f4142aa91f"} Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.801838 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" 
event={"ID":"45204263-0159-4c86-b81a-a900db07b14f","Type":"ContainerStarted","Data":"f076f81bf29321a523aa5373a441006ff695ed5c2c0f577c0822b4a5c8173c20"} Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.805824 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.809729 4836 patch_prober.go:28] interesting pod/oauth-openshift-574dcf5686-dttcs container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" start-of-body= Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.809800 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" podUID="45204263-0159-4c86-b81a-a900db07b14f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" Feb 17 14:10:14 crc kubenswrapper[4836]: I0217 14:10:14.830956 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" podStartSLOduration=28.830922267 podStartE2EDuration="28.830922267s" podCreationTimestamp="2026-02-17 14:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:10:14.828043121 +0000 UTC m=+241.170971400" watchObservedRunningTime="2026-02-17 14:10:14.830922267 +0000 UTC m=+241.173850536" Feb 17 14:10:15 crc kubenswrapper[4836]: I0217 14:10:15.818288 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-574dcf5686-dttcs" Feb 17 14:10:16 crc kubenswrapper[4836]: I0217 14:10:16.425317 4836 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:10:16 crc kubenswrapper[4836]: I0217 14:10:16.425701 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:10:16 crc kubenswrapper[4836]: I0217 14:10:16.426062 4836 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cbbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 17 14:10:16 crc kubenswrapper[4836]: I0217 14:10:16.426082 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cbbv" podUID="d9eb5c8b-f3c7-4068-82c7-28520f6905c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 17 14:10:16 crc kubenswrapper[4836]: I0217 14:10:16.478631 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:10:16 crc kubenswrapper[4836]: I0217 14:10:16.527318 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:10:16 crc kubenswrapper[4836]: I0217 14:10:16.821620 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-252vj" event={"ID":"a172042c-7dc6-4cea-906e-3d9135523f15","Type":"ContainerStarted","Data":"5fe13927481d2948ed6f845b9678013bbf8fcbf061f7116c0ec82c5abd9ee696"} 
Feb 17 14:10:17 crc kubenswrapper[4836]: I0217 14:10:17.218878 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:10:17 crc kubenswrapper[4836]: I0217 14:10:17.265844 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:10:17 crc kubenswrapper[4836]: I0217 14:10:17.391790 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:10:17 crc kubenswrapper[4836]: I0217 14:10:17.391885 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.070646 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmk55"] Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.264491 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.581273 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.638120 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.643935 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.878962 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" 
event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerStarted","Data":"c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c"} Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.885307 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8zr" event={"ID":"089d1289-afe9-4ffe-9d96-ac10058335ed","Type":"ContainerStarted","Data":"12b9c51f4d9306ca0c2b4adb55d1695962298f8f615d1a514d7884045bb5aea1"} Feb 17 14:10:18 crc kubenswrapper[4836]: I0217 14:10:18.885987 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cmk55" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="registry-server" containerID="cri-o://6edde19e8fa08b860a5493cc87da492992d83a470155d9fcd528dfc9281f65eb" gracePeriod=2 Feb 17 14:10:19 crc kubenswrapper[4836]: I0217 14:10:19.880026 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmpvx"] Feb 17 14:10:19 crc kubenswrapper[4836]: I0217 14:10:19.880692 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tmpvx" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="registry-server" containerID="cri-o://a23a8c919a9eea21ab628dc33e93962e83f3bdb7249542d3b4905dd07bca224b" gracePeriod=2 Feb 17 14:10:19 crc kubenswrapper[4836]: I0217 14:10:19.906143 4836 generic.go:334] "Generic (PLEG): container finished" podID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerID="6edde19e8fa08b860a5493cc87da492992d83a470155d9fcd528dfc9281f65eb" exitCode=0 Feb 17 14:10:19 crc kubenswrapper[4836]: I0217 14:10:19.906199 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmk55" event={"ID":"f1bd4ed0-3b99-4446-9218-71bb589da4a4","Type":"ContainerDied","Data":"6edde19e8fa08b860a5493cc87da492992d83a470155d9fcd528dfc9281f65eb"} Feb 17 14:10:20 crc 
kubenswrapper[4836]: I0217 14:10:20.930141 4836 generic.go:334] "Generic (PLEG): container finished" podID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerID="a23a8c919a9eea21ab628dc33e93962e83f3bdb7249542d3b4905dd07bca224b" exitCode=0 Feb 17 14:10:20 crc kubenswrapper[4836]: I0217 14:10:20.930274 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmpvx" event={"ID":"c6c873c6-ddde-4b9b-9141-e6de9be567d4","Type":"ContainerDied","Data":"a23a8c919a9eea21ab628dc33e93962e83f3bdb7249542d3b4905dd07bca224b"} Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.230144 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.421251 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-utilities\") pod \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.421417 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmwlv\" (UniqueName: \"kubernetes.io/projected/f1bd4ed0-3b99-4446-9218-71bb589da4a4-kube-api-access-fmwlv\") pod \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.421470 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-catalog-content\") pod \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\" (UID: \"f1bd4ed0-3b99-4446-9218-71bb589da4a4\") " Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.540390 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1bd4ed0-3b99-4446-9218-71bb589da4a4" (UID: "f1bd4ed0-3b99-4446-9218-71bb589da4a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.541707 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.543175 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-utilities" (OuterVolumeSpecName: "utilities") pod "f1bd4ed0-3b99-4446-9218-71bb589da4a4" (UID: "f1bd4ed0-3b99-4446-9218-71bb589da4a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.637913 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1bd4ed0-3b99-4446-9218-71bb589da4a4-kube-api-access-fmwlv" (OuterVolumeSpecName: "kube-api-access-fmwlv") pod "f1bd4ed0-3b99-4446-9218-71bb589da4a4" (UID: "f1bd4ed0-3b99-4446-9218-71bb589da4a4"). InnerVolumeSpecName "kube-api-access-fmwlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.642515 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bd4ed0-3b99-4446-9218-71bb589da4a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.642544 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmwlv\" (UniqueName: \"kubernetes.io/projected/f1bd4ed0-3b99-4446-9218-71bb589da4a4-kube-api-access-fmwlv\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.980983 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmk55" event={"ID":"f1bd4ed0-3b99-4446-9218-71bb589da4a4","Type":"ContainerDied","Data":"6e75d917f9b18c07b2feade7d6ceab556bb6226e0a78e8a3d47b72928e406bad"} Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.981585 4836 scope.go:117] "RemoveContainer" containerID="6edde19e8fa08b860a5493cc87da492992d83a470155d9fcd528dfc9281f65eb" Feb 17 14:10:21 crc kubenswrapper[4836]: I0217 14:10:21.981785 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmk55" Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.016458 4836 generic.go:334] "Generic (PLEG): container finished" podID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerID="12b9c51f4d9306ca0c2b4adb55d1695962298f8f615d1a514d7884045bb5aea1" exitCode=0 Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.016529 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8zr" event={"ID":"089d1289-afe9-4ffe-9d96-ac10058335ed","Type":"ContainerDied","Data":"12b9c51f4d9306ca0c2b4adb55d1695962298f8f615d1a514d7884045bb5aea1"} Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.260429 4836 scope.go:117] "RemoveContainer" containerID="73b212b6f45054b199f4f919939777dc7461c5c17f33a2a285ccf07966ece193" Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.262316 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kqqh"] Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.262649 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5kqqh" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="registry-server" containerID="cri-o://050ce9294d63c1cb12cc162f10c12c570ecb02593b7c8408dbd3bd6ac92c0e81" gracePeriod=2 Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.270579 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmk55"] Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.273339 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmk55"] Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.642790 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" path="/var/lib/kubelet/pods/f1bd4ed0-3b99-4446-9218-71bb589da4a4/volumes" Feb 17 14:10:22 crc 
kubenswrapper[4836]: I0217 14:10:22.987529 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:10:22 crc kubenswrapper[4836]: I0217 14:10:22.995675 4836 scope.go:117] "RemoveContainer" containerID="9679e4b4c4f0f644eb56ce9a9ac7ad7178d79f35bc0d94642d6b2ded1809a114" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.032016 4836 generic.go:334] "Generic (PLEG): container finished" podID="c85860a6-c3bb-448b-b812-cbf38230de01" containerID="050ce9294d63c1cb12cc162f10c12c570ecb02593b7c8408dbd3bd6ac92c0e81" exitCode=0 Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.032094 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kqqh" event={"ID":"c85860a6-c3bb-448b-b812-cbf38230de01","Type":"ContainerDied","Data":"050ce9294d63c1cb12cc162f10c12c570ecb02593b7c8408dbd3bd6ac92c0e81"} Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.045917 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmpvx" event={"ID":"c6c873c6-ddde-4b9b-9141-e6de9be567d4","Type":"ContainerDied","Data":"79e0157c4fae70c4a163e7552bd45039fe6e084cf3fa63db4fbd428401695df6"} Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.045993 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmpvx" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.045991 4836 scope.go:117] "RemoveContainer" containerID="a23a8c919a9eea21ab628dc33e93962e83f3bdb7249542d3b4905dd07bca224b" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.065023 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fk4b\" (UniqueName: \"kubernetes.io/projected/c6c873c6-ddde-4b9b-9141-e6de9be567d4-kube-api-access-7fk4b\") pod \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.065136 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-catalog-content\") pod \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.065272 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-utilities\") pod \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\" (UID: \"c6c873c6-ddde-4b9b-9141-e6de9be567d4\") " Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.067010 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-utilities" (OuterVolumeSpecName: "utilities") pod "c6c873c6-ddde-4b9b-9141-e6de9be567d4" (UID: "c6c873c6-ddde-4b9b-9141-e6de9be567d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.104788 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c873c6-ddde-4b9b-9141-e6de9be567d4-kube-api-access-7fk4b" (OuterVolumeSpecName: "kube-api-access-7fk4b") pod "c6c873c6-ddde-4b9b-9141-e6de9be567d4" (UID: "c6c873c6-ddde-4b9b-9141-e6de9be567d4"). InnerVolumeSpecName "kube-api-access-7fk4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.125654 4836 scope.go:117] "RemoveContainer" containerID="eac3b1d23d40a9e3d574bb39b162de6c6b11b16dff12abcfba61b9ba01c21760" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.151731 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6c873c6-ddde-4b9b-9141-e6de9be567d4" (UID: "c6c873c6-ddde-4b9b-9141-e6de9be567d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.167064 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.167108 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fk4b\" (UniqueName: \"kubernetes.io/projected/c6c873c6-ddde-4b9b-9141-e6de9be567d4-kube-api-access-7fk4b\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.167123 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6c873c6-ddde-4b9b-9141-e6de9be567d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.274638 4836 scope.go:117] "RemoveContainer" containerID="8aece0956593a85800757e782bdc3eb1d3d87f1ac99e3fc8ce9f7012a48be219" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.365653 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.400489 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmpvx"] Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.401693 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tmpvx"] Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.470695 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8k6d\" (UniqueName: \"kubernetes.io/projected/c85860a6-c3bb-448b-b812-cbf38230de01-kube-api-access-t8k6d\") pod \"c85860a6-c3bb-448b-b812-cbf38230de01\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.470816 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-utilities\") pod \"c85860a6-c3bb-448b-b812-cbf38230de01\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.470960 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-catalog-content\") pod \"c85860a6-c3bb-448b-b812-cbf38230de01\" (UID: \"c85860a6-c3bb-448b-b812-cbf38230de01\") " Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.472381 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-utilities" (OuterVolumeSpecName: "utilities") pod "c85860a6-c3bb-448b-b812-cbf38230de01" (UID: "c85860a6-c3bb-448b-b812-cbf38230de01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.475252 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85860a6-c3bb-448b-b812-cbf38230de01-kube-api-access-t8k6d" (OuterVolumeSpecName: "kube-api-access-t8k6d") pod "c85860a6-c3bb-448b-b812-cbf38230de01" (UID: "c85860a6-c3bb-448b-b812-cbf38230de01"). InnerVolumeSpecName "kube-api-access-t8k6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.498605 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c85860a6-c3bb-448b-b812-cbf38230de01" (UID: "c85860a6-c3bb-448b-b812-cbf38230de01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.572011 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8k6d\" (UniqueName: \"kubernetes.io/projected/c85860a6-c3bb-448b-b812-cbf38230de01-kube-api-access-t8k6d\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.572068 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:23 crc kubenswrapper[4836]: I0217 14:10:23.572079 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85860a6-c3bb-448b-b812-cbf38230de01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.056587 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8zr" 
event={"ID":"089d1289-afe9-4ffe-9d96-ac10058335ed","Type":"ContainerStarted","Data":"56a4ac051fd52f2fd8e193686dffb745df251c7f892fec72d600a2fa80ecbd34"} Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.058326 4836 generic.go:334] "Generic (PLEG): container finished" podID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerID="c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c" exitCode=0 Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.058402 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerDied","Data":"c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c"} Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.067583 4836 generic.go:334] "Generic (PLEG): container finished" podID="a172042c-7dc6-4cea-906e-3d9135523f15" containerID="5fe13927481d2948ed6f845b9678013bbf8fcbf061f7116c0ec82c5abd9ee696" exitCode=0 Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.067680 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-252vj" event={"ID":"a172042c-7dc6-4cea-906e-3d9135523f15","Type":"ContainerDied","Data":"5fe13927481d2948ed6f845b9678013bbf8fcbf061f7116c0ec82c5abd9ee696"} Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.072093 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kqqh" event={"ID":"c85860a6-c3bb-448b-b812-cbf38230de01","Type":"ContainerDied","Data":"f9efa614ea777c6c1f7f2234c739bb0e406ce4096c5477be16d8aba1cfb4c85e"} Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.072152 4836 scope.go:117] "RemoveContainer" containerID="050ce9294d63c1cb12cc162f10c12c570ecb02593b7c8408dbd3bd6ac92c0e81" Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.072191 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kqqh" Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.079430 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9w8zr" podStartSLOduration=8.918052428 podStartE2EDuration="1m40.079411311s" podCreationTimestamp="2026-02-17 14:08:44 +0000 UTC" firstStartedPulling="2026-02-17 14:08:51.932978604 +0000 UTC m=+158.275906873" lastFinishedPulling="2026-02-17 14:10:23.094337487 +0000 UTC m=+249.437265756" observedRunningTime="2026-02-17 14:10:24.078075756 +0000 UTC m=+250.421004115" watchObservedRunningTime="2026-02-17 14:10:24.079411311 +0000 UTC m=+250.422339600" Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.094348 4836 scope.go:117] "RemoveContainer" containerID="8d61046d718ebf03dc13da6194072e3009ef971818f44de3733bfb8ab11c1f92" Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.112874 4836 scope.go:117] "RemoveContainer" containerID="d36870560f8d1243c818dca57cf74dea7a07e8c43795bb396db32ccfc2a302b6" Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.165159 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kqqh"] Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.167902 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kqqh"] Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.575258 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" path="/var/lib/kubelet/pods/c6c873c6-ddde-4b9b-9141-e6de9be567d4/volumes" Feb 17 14:10:24 crc kubenswrapper[4836]: I0217 14:10:24.575989 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" path="/var/lib/kubelet/pods/c85860a6-c3bb-448b-b812-cbf38230de01/volumes" Feb 17 14:10:25 crc kubenswrapper[4836]: I0217 14:10:25.082136 4836 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerStarted","Data":"cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375"} Feb 17 14:10:25 crc kubenswrapper[4836]: I0217 14:10:25.083793 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-252vj" event={"ID":"a172042c-7dc6-4cea-906e-3d9135523f15","Type":"ContainerStarted","Data":"6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa"} Feb 17 14:10:25 crc kubenswrapper[4836]: I0217 14:10:25.118878 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5rfnm" podStartSLOduration=4.357341982 podStartE2EDuration="1m37.118848259s" podCreationTimestamp="2026-02-17 14:08:48 +0000 UTC" firstStartedPulling="2026-02-17 14:08:51.976065278 +0000 UTC m=+158.318993547" lastFinishedPulling="2026-02-17 14:10:24.737571555 +0000 UTC m=+251.080499824" observedRunningTime="2026-02-17 14:10:25.115795308 +0000 UTC m=+251.458723597" watchObservedRunningTime="2026-02-17 14:10:25.118848259 +0000 UTC m=+251.461776528" Feb 17 14:10:25 crc kubenswrapper[4836]: I0217 14:10:25.138895 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-252vj" podStartSLOduration=4.503022009 podStartE2EDuration="1m37.138843849s" podCreationTimestamp="2026-02-17 14:08:48 +0000 UTC" firstStartedPulling="2026-02-17 14:08:51.99005929 +0000 UTC m=+158.332987559" lastFinishedPulling="2026-02-17 14:10:24.62588112 +0000 UTC m=+250.968809399" observedRunningTime="2026-02-17 14:10:25.134018451 +0000 UTC m=+251.476946740" watchObservedRunningTime="2026-02-17 14:10:25.138843849 +0000 UTC m=+251.481772118" Feb 17 14:10:25 crc kubenswrapper[4836]: I0217 14:10:25.665887 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:10:25 crc kubenswrapper[4836]: I0217 14:10:25.665939 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:10:26 crc kubenswrapper[4836]: I0217 14:10:26.431247 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5cbbv" Feb 17 14:10:26 crc kubenswrapper[4836]: I0217 14:10:26.721629 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9w8zr" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:26 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:10:26 crc kubenswrapper[4836]: > Feb 17 14:10:28 crc kubenswrapper[4836]: I0217 14:10:28.574166 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:10:28 crc kubenswrapper[4836]: I0217 14:10:28.606832 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:10:29 crc kubenswrapper[4836]: I0217 14:10:29.482993 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:10:29 crc kubenswrapper[4836]: I0217 14:10:29.483466 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:10:29 crc kubenswrapper[4836]: I0217 14:10:29.628174 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-252vj" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:29 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:10:29 crc 
kubenswrapper[4836]: > Feb 17 14:10:30 crc kubenswrapper[4836]: I0217 14:10:30.541090 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5rfnm" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerName="registry-server" probeResult="failure" output=< Feb 17 14:10:30 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:10:30 crc kubenswrapper[4836]: > Feb 17 14:10:35 crc kubenswrapper[4836]: I0217 14:10:35.666595 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:10:35 crc kubenswrapper[4836]: I0217 14:10:35.710071 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:10:38 crc kubenswrapper[4836]: I0217 14:10:38.623039 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:10:38 crc kubenswrapper[4836]: I0217 14:10:38.673568 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:10:39 crc kubenswrapper[4836]: I0217 14:10:39.530950 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:10:39 crc kubenswrapper[4836]: I0217 14:10:39.569911 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:10:40 crc kubenswrapper[4836]: I0217 14:10:40.210925 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5rfnm"] Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.229654 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5rfnm" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" 
containerName="registry-server" containerID="cri-o://cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375" gracePeriod=2 Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.597879 4836 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.598950 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.598972 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599011 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599020 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599029 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599038 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599057 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599082 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599098 4836 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599105 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599122 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599129 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599163 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599170 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599181 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599189 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.599199 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599205 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599364 4836 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c85860a6-c3bb-448b-b812-cbf38230de01" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599455 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c873c6-ddde-4b9b-9141-e6de9be567d4" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.599469 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1bd4ed0-3b99-4446-9218-71bb589da4a4" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.600381 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.638287 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.663820 4836 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.664230 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71" gracePeriod=15 Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.664403 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0" gracePeriod=15 Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.664393 4836 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3" gracePeriod=15 Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.664566 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b" gracePeriod=15 Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.664538 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c" gracePeriod=15 Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.667596 4836 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.667877 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.667903 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.667917 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.667925 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" 
containerName="extract-utilities" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.667943 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668331 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668352 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668361 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668371 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668377 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668386 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668392 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668399 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668408 4836 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668419 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668425 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerName="extract-content" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668433 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668439 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668447 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668453 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668554 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668577 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668585 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 
14:10:41.668592 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668601 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668611 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668620 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerName="registry-server" Feb 17 14:10:41 crc kubenswrapper[4836]: E0217 14:10:41.668745 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668756 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.668888 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.844899 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.852775 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-utilities\") pod \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.852905 
4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnjxf\" (UniqueName: \"kubernetes.io/projected/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-kube-api-access-jnjxf\") pod \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853020 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-catalog-content\") pod \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\" (UID: \"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff\") " Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853306 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853340 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853431 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853499 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853617 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853647 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853680 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.853695 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.856786 4836 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-utilities" (OuterVolumeSpecName: "utilities") pod "88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" (UID: "88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.905956 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-kube-api-access-jnjxf" (OuterVolumeSpecName: "kube-api-access-jnjxf") pod "88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" (UID: "88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff"). InnerVolumeSpecName "kube-api-access-jnjxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.955749 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.955815 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.955856 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 
14:10:41.955892 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.955936 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.955957 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.955982 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.955998 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956067 4836 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956084 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnjxf\" (UniqueName: \"kubernetes.io/projected/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-kube-api-access-jnjxf\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956143 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956192 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956216 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956240 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956268 
4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956314 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956336 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.956360 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:41 crc kubenswrapper[4836]: I0217 14:10:41.994472 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" (UID: "88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.057664 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.143801 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:10:42 crc kubenswrapper[4836]: E0217 14:10:42.171877 4836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950e04c9e65df6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 14:10:42.170134006 +0000 UTC m=+268.513062275,LastTimestamp:2026-02-17 14:10:42.170134006 +0000 UTC m=+268.513062275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.244143 4836 generic.go:334] "Generic (PLEG): container finished" podID="ffb43a5d-f735-4891-912a-3ba9e47a4055" containerID="ce3e52bd320d663e1a8fc906cd936e572a8197efd430dd75fd6c81b3471e6dd4" exitCode=0 
Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.244248 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ffb43a5d-f735-4891-912a-3ba9e47a4055","Type":"ContainerDied","Data":"ce3e52bd320d663e1a8fc906cd936e572a8197efd430dd75fd6c81b3471e6dd4"} Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.245520 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.246076 4836 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.246307 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.247218 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"17a42e4ab2b42f2910a34e8c55afe5dfb679b723c809ad5f44ffa7b713039e7e"} Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.252316 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.254384 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.255688 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c" exitCode=0 Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.255716 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b" exitCode=0 Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.255729 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3" exitCode=0 Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.255738 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0" exitCode=2 Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.255825 4836 scope.go:117] "RemoveContainer" containerID="a13a546a929cc67fa3a708f23186f65c3dbc28afd67a421a66c732b8308e4177" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.261595 4836 generic.go:334] "Generic (PLEG): container finished" podID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" containerID="cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375" exitCode=0 Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.261671 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" 
event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerDied","Data":"cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375"} Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.261700 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rfnm" event={"ID":"88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff","Type":"ContainerDied","Data":"f4532e92cda0f4cb49095bb6a57a64c5099b9a9601e8744de1408686bb81a1cb"} Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.261740 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rfnm" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.262684 4836 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.263226 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.263958 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.264181 4836 status_manager.go:851] "Failed to get status for pod" 
podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.279550 4836 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.280032 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.280333 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.280619 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.300716 4836 scope.go:117] "RemoveContainer" 
containerID="cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.328439 4836 scope.go:117] "RemoveContainer" containerID="c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.346241 4836 scope.go:117] "RemoveContainer" containerID="6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.374663 4836 scope.go:117] "RemoveContainer" containerID="cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375" Feb 17 14:10:42 crc kubenswrapper[4836]: E0217 14:10:42.376114 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375\": container with ID starting with cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375 not found: ID does not exist" containerID="cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.376166 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375"} err="failed to get container status \"cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375\": rpc error: code = NotFound desc = could not find container \"cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375\": container with ID starting with cde4d60e0777293bf4b491417a101ff54d8c34c827d15b3c2d4b38007512d375 not found: ID does not exist" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.376200 4836 scope.go:117] "RemoveContainer" containerID="c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c" Feb 17 14:10:42 crc kubenswrapper[4836]: E0217 14:10:42.376614 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c\": container with ID starting with c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c not found: ID does not exist" containerID="c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.376667 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c"} err="failed to get container status \"c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c\": rpc error: code = NotFound desc = could not find container \"c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c\": container with ID starting with c3a9e8665fc73a09585fc2a34fa81b5e1b737549477832ab2c07840098e0bf3c not found: ID does not exist" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.376709 4836 scope.go:117] "RemoveContainer" containerID="6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8" Feb 17 14:10:42 crc kubenswrapper[4836]: E0217 14:10:42.377104 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8\": container with ID starting with 6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8 not found: ID does not exist" containerID="6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8" Feb 17 14:10:42 crc kubenswrapper[4836]: I0217 14:10:42.377152 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8"} err="failed to get container status \"6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8\": rpc error: code = NotFound desc = could not find container 
\"6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8\": container with ID starting with 6bb0c69e3bd51a902da87668ec1687c119230884a9fa4dff71d17ec28d9300e8 not found: ID does not exist" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.272291 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.277528 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e"} Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.279516 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.279892 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.280101 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:43 crc 
kubenswrapper[4836]: I0217 14:10:43.516307 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.516903 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.517205 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.517585 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.577437 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-kubelet-dir\") pod \"ffb43a5d-f735-4891-912a-3ba9e47a4055\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.577521 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-var-lock\") pod 
\"ffb43a5d-f735-4891-912a-3ba9e47a4055\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.577587 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb43a5d-f735-4891-912a-3ba9e47a4055-kube-api-access\") pod \"ffb43a5d-f735-4891-912a-3ba9e47a4055\" (UID: \"ffb43a5d-f735-4891-912a-3ba9e47a4055\") " Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.577586 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ffb43a5d-f735-4891-912a-3ba9e47a4055" (UID: "ffb43a5d-f735-4891-912a-3ba9e47a4055"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.577658 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-var-lock" (OuterVolumeSpecName: "var-lock") pod "ffb43a5d-f735-4891-912a-3ba9e47a4055" (UID: "ffb43a5d-f735-4891-912a-3ba9e47a4055"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.577833 4836 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.577848 4836 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb43a5d-f735-4891-912a-3ba9e47a4055-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.583348 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb43a5d-f735-4891-912a-3ba9e47a4055-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ffb43a5d-f735-4891-912a-3ba9e47a4055" (UID: "ffb43a5d-f735-4891-912a-3ba9e47a4055"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:43 crc kubenswrapper[4836]: I0217 14:10:43.678816 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb43a5d-f735-4891-912a-3ba9e47a4055-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.285338 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ffb43a5d-f735-4891-912a-3ba9e47a4055","Type":"ContainerDied","Data":"29d09f7152c586bbd27086739e98c573e0ee0e7fe283e6e465f8d46989dcb7a3"} Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.286107 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d09f7152c586bbd27086739e98c573e0ee0e7fe283e6e465f8d46989dcb7a3" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.285534 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.336929 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.337271 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.337705 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.340103 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.340833 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.341219 4836 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.341532 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.341811 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.342075 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388002 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 
14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388058 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388100 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388177 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388178 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388209 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388554 4836 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388607 4836 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.388620 4836 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.570730 4836 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.571058 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.571353 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: 
connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.571583 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:44 crc kubenswrapper[4836]: I0217 14:10:44.575597 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 14:10:44 crc kubenswrapper[4836]: E0217 14:10:44.588741 4836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950e04c9e65df6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 14:10:42.170134006 +0000 UTC m=+268.513062275,LastTimestamp:2026-02-17 14:10:42.170134006 +0000 UTC m=+268.513062275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.293545 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.294131 4836 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71" exitCode=0 Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.294181 4836 scope.go:117] "RemoveContainer" containerID="14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.294335 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.295135 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.295356 4836 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.295650 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 
14:10:45.295805 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.301401 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.301679 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.301887 4836 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.302097 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.317748 4836 
scope.go:117] "RemoveContainer" containerID="2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.344965 4836 scope.go:117] "RemoveContainer" containerID="9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.371373 4836 scope.go:117] "RemoveContainer" containerID="edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.388839 4836 scope.go:117] "RemoveContainer" containerID="281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.404145 4836 scope.go:117] "RemoveContainer" containerID="bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.431549 4836 scope.go:117] "RemoveContainer" containerID="14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c" Feb 17 14:10:45 crc kubenswrapper[4836]: E0217 14:10:45.432859 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\": container with ID starting with 14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c not found: ID does not exist" containerID="14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.432932 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c"} err="failed to get container status \"14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\": rpc error: code = NotFound desc = could not find container \"14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c\": container with ID starting with 
14e1fa543737ef273b75b2cffc78dc2edf4d1673c2c4095edda7c40db8238f2c not found: ID does not exist" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.432975 4836 scope.go:117] "RemoveContainer" containerID="2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b" Feb 17 14:10:45 crc kubenswrapper[4836]: E0217 14:10:45.433408 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\": container with ID starting with 2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b not found: ID does not exist" containerID="2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.433536 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b"} err="failed to get container status \"2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\": rpc error: code = NotFound desc = could not find container \"2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b\": container with ID starting with 2b36202875b19f7dafcfde168e006d9d47571de30240d5886e5e0920b74b609b not found: ID does not exist" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.433667 4836 scope.go:117] "RemoveContainer" containerID="9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3" Feb 17 14:10:45 crc kubenswrapper[4836]: E0217 14:10:45.434133 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\": container with ID starting with 9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3 not found: ID does not exist" containerID="9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3" Feb 17 14:10:45 crc 
kubenswrapper[4836]: I0217 14:10:45.434171 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3"} err="failed to get container status \"9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\": rpc error: code = NotFound desc = could not find container \"9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3\": container with ID starting with 9bc90fa5f9363034d51c35721493655d10f8f8d5fad4dda86ca4c882963fb7a3 not found: ID does not exist" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.434212 4836 scope.go:117] "RemoveContainer" containerID="edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0" Feb 17 14:10:45 crc kubenswrapper[4836]: E0217 14:10:45.434526 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\": container with ID starting with edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0 not found: ID does not exist" containerID="edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.434580 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0"} err="failed to get container status \"edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\": rpc error: code = NotFound desc = could not find container \"edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0\": container with ID starting with edaaa277a27d7e7b9e8b42acaa9ee04fc0933b440441ad3f8abee458e96945c0 not found: ID does not exist" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.434619 4836 scope.go:117] "RemoveContainer" containerID="281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71" Feb 17 
14:10:45 crc kubenswrapper[4836]: E0217 14:10:45.435153 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\": container with ID starting with 281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71 not found: ID does not exist" containerID="281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.435186 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71"} err="failed to get container status \"281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\": rpc error: code = NotFound desc = could not find container \"281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71\": container with ID starting with 281c6f51b95a96d557a29b449759275131f2fde30c4814e7692edfff96442e71 not found: ID does not exist" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.435225 4836 scope.go:117] "RemoveContainer" containerID="bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905" Feb 17 14:10:45 crc kubenswrapper[4836]: E0217 14:10:45.436028 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\": container with ID starting with bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905 not found: ID does not exist" containerID="bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905" Feb 17 14:10:45 crc kubenswrapper[4836]: I0217 14:10:45.436161 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905"} err="failed to get container status 
\"bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\": rpc error: code = NotFound desc = could not find container \"bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905\": container with ID starting with bf6a761df18c5e7699f2ec87d4b97f19da71f588cb12eff8e541140a361fa905 not found: ID does not exist" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.036103 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:10:47Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:10:47Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:10:47Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T14:10:47Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:77c09c30acdeaaf95ab463052841d32404d264d7b46bead6207afe51848d25e3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:b7b252dee7cfed79b278bcdec32ab88d70e98e83e6c0db9565a87d9e962cfecb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701350082},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.re
dhat.io/redhat/certified-operator-index@sha256:14398311b101163ddd1de78c093e161c5d3c9aac51a04e3d3d842fca6317ab0f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:5a091792b99bf4dfaec25f4c8e29da579e2f452d48b924c8323a18accb7f3290\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234637517},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad77d0ead8abca8b884fad3be18215dbe8b4f8f098053551e4a899298cf5c918\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:b5338e2ca87e0b47fec93f55559f0ed6b39eef3ed3b7f085a4f0b205ccb86a5d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213306565},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b329548
4d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ed
a5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.036890 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.037067 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.037217 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.037370 4836 kubelet_node_status.go:585] "Error updating node status, will retry" err="error 
getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.037384 4836 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.189208 4836 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.189472 4836 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.189694 4836 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.189896 4836 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.190098 4836 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:47 crc kubenswrapper[4836]: I0217 14:10:47.190127 4836 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 
17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.190318 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.391901 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Feb 17 14:10:47 crc kubenswrapper[4836]: E0217 14:10:47.793185 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms" Feb 17 14:10:48 crc kubenswrapper[4836]: E0217 14:10:48.594485 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="1.6s" Feb 17 14:10:50 crc kubenswrapper[4836]: E0217 14:10:50.196453 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="3.2s" Feb 17 14:10:53 crc kubenswrapper[4836]: E0217 14:10:53.397863 4836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="6.4s" Feb 17 14:10:53 crc 
kubenswrapper[4836]: I0217 14:10:53.567310 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:53 crc kubenswrapper[4836]: I0217 14:10:53.567981 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:53 crc kubenswrapper[4836]: I0217 14:10:53.568371 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:53 crc kubenswrapper[4836]: I0217 14:10:53.568710 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:53 crc kubenswrapper[4836]: I0217 14:10:53.582135 4836 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:10:53 crc kubenswrapper[4836]: I0217 14:10:53.582174 4836 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:10:53 crc kubenswrapper[4836]: E0217 14:10:53.582922 4836 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:53 crc kubenswrapper[4836]: I0217 14:10:53.583995 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:54 crc kubenswrapper[4836]: I0217 14:10:54.356921 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bf91c6eb62e340b7bc0a6a5d6be5289b8047eab0dce6f276679ec8ec68eb5286"} Feb 17 14:10:54 crc kubenswrapper[4836]: I0217 14:10:54.574803 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:54 crc kubenswrapper[4836]: I0217 14:10:54.575251 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:54 crc kubenswrapper[4836]: I0217 14:10:54.575979 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:54 crc kubenswrapper[4836]: I0217 14:10:54.576534 4836 
status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:54 crc kubenswrapper[4836]: E0217 14:10:54.590697 4836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950e04c9e65df6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 14:10:42.170134006 +0000 UTC m=+268.513062275,LastTimestamp:2026-02-17 14:10:42.170134006 +0000 UTC m=+268.513062275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.372336 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.372394 4836 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc" exitCode=1 Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.372461 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc"} Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.373339 4836 scope.go:117] "RemoveContainer" containerID="b6196237bf10370f0d43ac7ba96f1fbc861d8690734cf883081f67616e6047dc" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.373604 4836 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.374402 4836 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.374820 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.374974 4836 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="3077a3e0255093329a2e6d9fd21fb4fc0023c6b610b391d980a05fed4eddab3d" exitCode=0 Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.375041 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3077a3e0255093329a2e6d9fd21fb4fc0023c6b610b391d980a05fed4eddab3d"} Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.375137 4836 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.375160 4836 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:10:55 crc kubenswrapper[4836]: E0217 14:10:55.375411 4836 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.375545 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.376024 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 
14:10:55.376764 4836 status_manager.go:851] "Failed to get status for pod" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.377374 4836 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.377992 4836 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.378508 4836 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: I0217 14:10:55.378773 4836 status_manager.go:851] "Failed to get status for pod" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" pod="openshift-marketplace/redhat-operators-5rfnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5rfnm\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 17 14:10:55 crc kubenswrapper[4836]: 
I0217 14:10:55.922367 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:10:56 crc kubenswrapper[4836]: I0217 14:10:56.384555 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df5604c711d07a00c95b46282b2130f03ca9f80f7eeadd5328d6a53447b2cafd"} Feb 17 14:10:56 crc kubenswrapper[4836]: I0217 14:10:56.384607 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a62f76c3004a201a9960d20c895684207c229b1ef49ad96499ee78628828853b"} Feb 17 14:10:56 crc kubenswrapper[4836]: I0217 14:10:56.391322 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 14:10:56 crc kubenswrapper[4836]: I0217 14:10:56.391370 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6e01917f3bbcd52cacc7f629e1bdcb4b5ebe5d9cbc8366c09b3f5f83d046b8f"} Feb 17 14:10:56 crc kubenswrapper[4836]: I0217 14:10:56.437892 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:10:56 crc kubenswrapper[4836]: I0217 14:10:56.446694 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:10:57 crc kubenswrapper[4836]: I0217 14:10:57.416509 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ece71941e8928ac74a98d60736174b728094d825a0933c160c294572fc2102f6"} Feb 17 14:10:57 crc kubenswrapper[4836]: I0217 14:10:57.417143 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"75dc129d71fbe6ef412d19324fa8cc3d4a50c1dbf2df6234354c4be811fcda50"} Feb 17 14:10:57 crc kubenswrapper[4836]: I0217 14:10:57.417198 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:10:58 crc kubenswrapper[4836]: I0217 14:10:58.425342 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"776243837c45b0b35c6d11141421a4357f14f398cc3c9d1ae1cfbd04cde7f3f2"} Feb 17 14:10:58 crc kubenswrapper[4836]: I0217 14:10:58.425897 4836 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:10:58 crc kubenswrapper[4836]: I0217 14:10:58.425921 4836 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:10:58 crc kubenswrapper[4836]: I0217 14:10:58.584520 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:58 crc kubenswrapper[4836]: I0217 14:10:58.584605 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:10:58 crc kubenswrapper[4836]: I0217 14:10:58.589582 4836 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]log ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]etcd ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/priority-and-fairness-filter ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-apiextensions-informers ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-apiextensions-controllers ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/crd-informer-synced ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-system-namespaces-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: 
[+]poststarthook/start-service-ip-repair-controllers ok Feb 17 14:10:58 crc kubenswrapper[4836]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 17 14:10:58 crc kubenswrapper[4836]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/bootstrap-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/start-kube-aggregator-informers ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/apiservice-registration-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/apiservice-discovery-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]autoregister-completion ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/apiservice-openapi-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 17 14:10:58 crc kubenswrapper[4836]: livez check failed Feb 17 14:10:58 crc kubenswrapper[4836]: I0217 14:10:58.589650 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 14:11:03 crc kubenswrapper[4836]: I0217 14:11:03.434026 4836 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:11:03 crc kubenswrapper[4836]: I0217 14:11:03.463546 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:11:03 crc kubenswrapper[4836]: I0217 14:11:03.463559 4836 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:11:03 crc kubenswrapper[4836]: I0217 14:11:03.463605 4836 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:11:03 crc kubenswrapper[4836]: I0217 14:11:03.588644 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:11:03 crc kubenswrapper[4836]: I0217 14:11:03.591731 4836 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8349b29e-b284-4c0a-bda2-bda8a9c51c5d" Feb 17 14:11:04 crc kubenswrapper[4836]: I0217 14:11:04.469935 4836 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:11:04 crc kubenswrapper[4836]: I0217 14:11:04.469971 4836 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:11:04 crc kubenswrapper[4836]: I0217 14:11:04.473058 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:11:04 crc kubenswrapper[4836]: I0217 14:11:04.582506 4836 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="8349b29e-b284-4c0a-bda2-bda8a9c51c5d" Feb 17 14:11:05 crc kubenswrapper[4836]: I0217 14:11:05.473574 4836 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:11:05 crc kubenswrapper[4836]: I0217 14:11:05.473609 4836 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15e7e5f3-c249-4609-937e-ffdc78580880" Feb 17 14:11:05 crc kubenswrapper[4836]: I0217 14:11:05.503069 4836 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8349b29e-b284-4c0a-bda2-bda8a9c51c5d" Feb 17 14:11:12 crc kubenswrapper[4836]: I0217 14:11:12.353713 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 14:11:13 crc kubenswrapper[4836]: I0217 14:11:13.411797 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 14:11:13 crc kubenswrapper[4836]: I0217 14:11:13.446678 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 14:11:13 crc kubenswrapper[4836]: I0217 14:11:13.474651 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 14:11:13 crc kubenswrapper[4836]: I0217 14:11:13.511966 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 14:11:14 crc kubenswrapper[4836]: I0217 14:11:14.256647 4836 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 14:11:14 crc kubenswrapper[4836]: I0217 14:11:14.328796 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 17 14:11:14 crc kubenswrapper[4836]: I0217 14:11:14.380900 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 14:11:14 crc kubenswrapper[4836]: I0217 14:11:14.830821 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 14:11:14 crc kubenswrapper[4836]: I0217 14:11:14.879427 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 14:11:14 crc kubenswrapper[4836]: I0217 14:11:14.879465 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.138732 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.157447 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.165800 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.184428 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.234654 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.340504 4836 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.598375 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.655430 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.811448 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 14:11:15 crc kubenswrapper[4836]: I0217 14:11:15.924911 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.054374 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.245940 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.294985 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.353813 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.399520 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.409549 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.474821 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.494930 4836 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.607364 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.628819 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.650521 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.688605 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.701926 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.840431 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 14:11:16 crc kubenswrapper[4836]: I0217 14:11:16.858054 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.167449 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.197155 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.249909 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 
14:11:17.273392 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.313616 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.325673 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.436352 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.451630 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.549770 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.568486 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.589123 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.640929 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.785071 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.790858 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.903682 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.950246 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 14:11:17 crc kubenswrapper[4836]: I0217 14:11:17.963460 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.025976 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.095995 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.157838 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.178952 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.184127 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.196551 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.241145 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 
14:11:18.278342 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.278412 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.506777 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.510708 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.527766 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.665353 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.685935 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.727243 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.727763 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.737503 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.775321 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.971905 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 14:11:18 crc kubenswrapper[4836]: I0217 14:11:18.972649 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.026969 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.067865 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.097483 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.159902 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.238121 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.243450 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.335406 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.348445 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.419077 4836 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.498345 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.499431 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.617753 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.621492 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.622256 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.660018 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.706048 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.838340 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.915205 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 14:11:19.920946 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 14:11:19 crc kubenswrapper[4836]: I0217 
14:11:19.986183 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.003253 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.095614 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.167801 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.224381 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.461369 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.536835 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.604055 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.621464 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.649015 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.824408 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.868037 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 14:11:20 crc kubenswrapper[4836]: I0217 14:11:20.948991 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.107887 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.131895 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.159364 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.179682 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.260142 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.426728 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.478724 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.525120 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.635846 
4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.637363 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.685080 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.738627 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.823480 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.830958 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.839558 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.893921 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.905943 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.908387 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.942016 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 
14:11:21 crc kubenswrapper[4836]: I0217 14:11:21.942374 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.044667 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.090619 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.134583 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.166787 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.166995 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.177380 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.216499 4836 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.236010 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.238332 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.288347 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.291658 4836 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.294350 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.317815 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.350733 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.405057 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.420011 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.490105 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.506547 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.561652 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.579279 4836 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.629055 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.655236 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.670998 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 14:11:22 crc kubenswrapper[4836]: I0217 14:11:22.919808 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.103971 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.117143 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.159875 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.162952 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.191065 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.219675 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.423145 4836 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.439934 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.445495 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.464466 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.490081 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.491546 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.575371 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.607940 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.609727 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.694549 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.702564 4836 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.728603 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.779317 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.821662 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.832427 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 14:11:23 crc kubenswrapper[4836]: I0217 14:11:23.854156 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.030603 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.060416 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.062791 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.188193 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.238854 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.246219 4836 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.277705 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.314000 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.450409 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.503415 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.544314 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.687101 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.711675 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.755676 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.781006 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.836394 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 
17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.903971 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 14:11:24 crc kubenswrapper[4836]: I0217 14:11:24.988016 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.016464 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.028432 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.053768 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.060152 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.061652 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.071332 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.075214 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.092333 4836 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.115665 4836 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.119571 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.210846 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.237071 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.237099 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.409643 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.424701 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.458215 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.458230 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.473914 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.550000 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.550003 4836 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.640353 4836 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.773642 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.803851 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 14:11:25 crc kubenswrapper[4836]: I0217 14:11:25.925150 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.062194 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.221026 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.258550 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.273751 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.325610 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.379113 4836 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.433648 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.503149 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.541626 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.643218 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.662324 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.854930 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.897817 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 14:11:26 crc kubenswrapper[4836]: I0217 14:11:26.910715 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.131073 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.237367 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 14:11:27 crc kubenswrapper[4836]: 
I0217 14:11:27.350914 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.415907 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.477958 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.503025 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.552576 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.645150 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.706358 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.786280 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.817960 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 14:11:27 crc kubenswrapper[4836]: I0217 14:11:27.926653 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 
17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.146510 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.181935 4836 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.185250 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=47.185215921 podStartE2EDuration="47.185215921s" podCreationTimestamp="2026-02-17 14:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:11:02.943033577 +0000 UTC m=+289.285961866" watchObservedRunningTime="2026-02-17 14:11:28.185215921 +0000 UTC m=+314.528144190" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.186956 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5rfnm","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.187032 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhsgl","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 14:11:28 crc kubenswrapper[4836]: E0217 14:11:28.187283 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" containerName="installer" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.187319 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" containerName="installer" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.187454 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb43a5d-f735-4891-912a-3ba9e47a4055" containerName="installer" Feb 17 14:11:28 crc kubenswrapper[4836]: 
I0217 14:11:28.188008 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khbdr","openshift-marketplace/redhat-marketplace-pxwhr","openshift-marketplace/community-operators-9w8zr","openshift-marketplace/redhat-operators-252vj","openshift-marketplace/certified-operators-vfmw4"] Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.188115 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.188483 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-252vj" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="registry-server" containerID="cri-o://6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa" gracePeriod=30 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.188595 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vfmw4" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="registry-server" containerID="cri-o://63859747d78e0b196aa1ae4f9aecdf579a3667fc25d5a072dbcb78b4447b6dc2" gracePeriod=30 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.188641 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" podUID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" containerName="marketplace-operator" containerID="cri-o://4b1cfa0180186477ad01885b0380528a2ed9a9e38e3b90ab0219a2e26e3de881" gracePeriod=30 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.188711 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9w8zr" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="registry-server" 
containerID="cri-o://56a4ac051fd52f2fd8e193686dffb745df251c7f892fec72d600a2fa80ecbd34" gracePeriod=30 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.188814 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxwhr" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="registry-server" containerID="cri-o://c2eaa809f67d2bf376430950b6f31e802fbb9ae20ab0242708d65041bbaf3f07" gracePeriod=30 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.192092 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.197456 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.197514 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.197598 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnh7k\" (UniqueName: \"kubernetes.io/projected/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-kube-api-access-vnh7k\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.225262 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.225235428 podStartE2EDuration="25.225235428s" podCreationTimestamp="2026-02-17 14:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:11:28.221927865 +0000 UTC m=+314.564856134" watchObservedRunningTime="2026-02-17 14:11:28.225235428 +0000 UTC m=+314.568163707" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.235277 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.254816 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.300323 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.300383 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnh7k\" (UniqueName: \"kubernetes.io/projected/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-kube-api-access-vnh7k\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.300504 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.301885 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.309278 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.338170 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnh7k\" (UniqueName: \"kubernetes.io/projected/bd68f8c7-fdcc-449d-9f92-2f7afcb4917b-kube-api-access-vnh7k\") pod \"marketplace-operator-79b997595-rhsgl\" (UID: \"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.385242 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.411674 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" 
Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.512952 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.545743 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.583845 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff" path="/var/lib/kubelet/pods/88cf2bb1-d70f-4b82-9b9a-9d7c7a4244ff/volumes" Feb 17 14:11:28 crc kubenswrapper[4836]: E0217 14:11:28.607022 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa is running failed: container process not found" containerID="6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa" cmd=["grpc_health_probe","-addr=:50051"] Feb 17 14:11:28 crc kubenswrapper[4836]: E0217 14:11:28.607451 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa is running failed: container process not found" containerID="6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa" cmd=["grpc_health_probe","-addr=:50051"] Feb 17 14:11:28 crc kubenswrapper[4836]: E0217 14:11:28.607786 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa is running failed: container process not found" containerID="6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa" 
cmd=["grpc_health_probe","-addr=:50051"] Feb 17 14:11:28 crc kubenswrapper[4836]: E0217 14:11:28.607842 4836 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-252vj" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="registry-server" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.615319 4836 generic.go:334] "Generic (PLEG): container finished" podID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerID="56a4ac051fd52f2fd8e193686dffb745df251c7f892fec72d600a2fa80ecbd34" exitCode=0 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.615396 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8zr" event={"ID":"089d1289-afe9-4ffe-9d96-ac10058335ed","Type":"ContainerDied","Data":"56a4ac051fd52f2fd8e193686dffb745df251c7f892fec72d600a2fa80ecbd34"} Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.615425 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8zr" event={"ID":"089d1289-afe9-4ffe-9d96-ac10058335ed","Type":"ContainerDied","Data":"aec4a035ba778cf216a49780b8ffa622c813a3d3daa4a826e68b03c1acc34c4d"} Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.615438 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec4a035ba778cf216a49780b8ffa622c813a3d3daa4a826e68b03c1acc34c4d" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.619893 4836 generic.go:334] "Generic (PLEG): container finished" podID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" containerID="4b1cfa0180186477ad01885b0380528a2ed9a9e38e3b90ab0219a2e26e3de881" exitCode=0 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.619942 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" event={"ID":"985bc83c-52fa-45dc-ab4f-6e47ee47683e","Type":"ContainerDied","Data":"4b1cfa0180186477ad01885b0380528a2ed9a9e38e3b90ab0219a2e26e3de881"} Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.626105 4836 generic.go:334] "Generic (PLEG): container finished" podID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerID="c2eaa809f67d2bf376430950b6f31e802fbb9ae20ab0242708d65041bbaf3f07" exitCode=0 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.626277 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwhr" event={"ID":"e9f23804-837d-4d3c-94b7-7cdefe6e94df","Type":"ContainerDied","Data":"c2eaa809f67d2bf376430950b6f31e802fbb9ae20ab0242708d65041bbaf3f07"} Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.628818 4836 generic.go:334] "Generic (PLEG): container finished" podID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerID="63859747d78e0b196aa1ae4f9aecdf579a3667fc25d5a072dbcb78b4447b6dc2" exitCode=0 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.628857 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfmw4" event={"ID":"8762f2f2-8375-4fdd-8a29-ea2ab598afa1","Type":"ContainerDied","Data":"63859747d78e0b196aa1ae4f9aecdf579a3667fc25d5a072dbcb78b4447b6dc2"} Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.639168 4836 generic.go:334] "Generic (PLEG): container finished" podID="a172042c-7dc6-4cea-906e-3d9135523f15" containerID="6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa" exitCode=0 Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.639327 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-252vj" event={"ID":"a172042c-7dc6-4cea-906e-3d9135523f15","Type":"ContainerDied","Data":"6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa"} Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 
14:11:28.640119 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.679829 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.698006 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.711362 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.711890 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-catalog-content\") pod \"089d1289-afe9-4ffe-9d96-ac10058335ed\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.720681 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.730006 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.786209 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhsgl"] Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.803921 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "089d1289-afe9-4ffe-9d96-ac10058335ed" (UID: "089d1289-afe9-4ffe-9d96-ac10058335ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813093 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-utilities\") pod \"a172042c-7dc6-4cea-906e-3d9135523f15\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813144 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-catalog-content\") pod \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813174 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqqlz\" (UniqueName: \"kubernetes.io/projected/e9f23804-837d-4d3c-94b7-7cdefe6e94df-kube-api-access-hqqlz\") pod \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813198 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thgpf\" (UniqueName: 
\"kubernetes.io/projected/a172042c-7dc6-4cea-906e-3d9135523f15-kube-api-access-thgpf\") pod \"a172042c-7dc6-4cea-906e-3d9135523f15\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813220 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-utilities\") pod \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813251 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd5dt\" (UniqueName: \"kubernetes.io/projected/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-kube-api-access-xd5dt\") pod \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813285 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-operator-metrics\") pod \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813339 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-utilities\") pod \"089d1289-afe9-4ffe-9d96-ac10058335ed\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813379 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknx6\" (UniqueName: \"kubernetes.io/projected/089d1289-afe9-4ffe-9d96-ac10058335ed-kube-api-access-tknx6\") pod \"089d1289-afe9-4ffe-9d96-ac10058335ed\" (UID: \"089d1289-afe9-4ffe-9d96-ac10058335ed\") " Feb 17 
14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813406 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-catalog-content\") pod \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\" (UID: \"8762f2f2-8375-4fdd-8a29-ea2ab598afa1\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813433 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-catalog-content\") pod \"a172042c-7dc6-4cea-906e-3d9135523f15\" (UID: \"a172042c-7dc6-4cea-906e-3d9135523f15\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813474 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-utilities\") pod \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\" (UID: \"e9f23804-837d-4d3c-94b7-7cdefe6e94df\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813498 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-trusted-ca\") pod \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813526 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfrbt\" (UniqueName: \"kubernetes.io/projected/985bc83c-52fa-45dc-ab4f-6e47ee47683e-kube-api-access-sfrbt\") pod \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\" (UID: \"985bc83c-52fa-45dc-ab4f-6e47ee47683e\") " Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.813694 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.815061 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-utilities" (OuterVolumeSpecName: "utilities") pod "089d1289-afe9-4ffe-9d96-ac10058335ed" (UID: "089d1289-afe9-4ffe-9d96-ac10058335ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.815631 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-utilities" (OuterVolumeSpecName: "utilities") pod "8762f2f2-8375-4fdd-8a29-ea2ab598afa1" (UID: "8762f2f2-8375-4fdd-8a29-ea2ab598afa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.816635 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-utilities" (OuterVolumeSpecName: "utilities") pod "e9f23804-837d-4d3c-94b7-7cdefe6e94df" (UID: "e9f23804-837d-4d3c-94b7-7cdefe6e94df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.816729 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-utilities" (OuterVolumeSpecName: "utilities") pod "a172042c-7dc6-4cea-906e-3d9135523f15" (UID: "a172042c-7dc6-4cea-906e-3d9135523f15"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.817460 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "985bc83c-52fa-45dc-ab4f-6e47ee47683e" (UID: "985bc83c-52fa-45dc-ab4f-6e47ee47683e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.821058 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "985bc83c-52fa-45dc-ab4f-6e47ee47683e" (UID: "985bc83c-52fa-45dc-ab4f-6e47ee47683e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.825642 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a172042c-7dc6-4cea-906e-3d9135523f15-kube-api-access-thgpf" (OuterVolumeSpecName: "kube-api-access-thgpf") pod "a172042c-7dc6-4cea-906e-3d9135523f15" (UID: "a172042c-7dc6-4cea-906e-3d9135523f15"). InnerVolumeSpecName "kube-api-access-thgpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.825872 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-kube-api-access-xd5dt" (OuterVolumeSpecName: "kube-api-access-xd5dt") pod "8762f2f2-8375-4fdd-8a29-ea2ab598afa1" (UID: "8762f2f2-8375-4fdd-8a29-ea2ab598afa1"). InnerVolumeSpecName "kube-api-access-xd5dt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.825973 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f23804-837d-4d3c-94b7-7cdefe6e94df-kube-api-access-hqqlz" (OuterVolumeSpecName: "kube-api-access-hqqlz") pod "e9f23804-837d-4d3c-94b7-7cdefe6e94df" (UID: "e9f23804-837d-4d3c-94b7-7cdefe6e94df"). InnerVolumeSpecName "kube-api-access-hqqlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.837073 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985bc83c-52fa-45dc-ab4f-6e47ee47683e-kube-api-access-sfrbt" (OuterVolumeSpecName: "kube-api-access-sfrbt") pod "985bc83c-52fa-45dc-ab4f-6e47ee47683e" (UID: "985bc83c-52fa-45dc-ab4f-6e47ee47683e"). InnerVolumeSpecName "kube-api-access-sfrbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.845696 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089d1289-afe9-4ffe-9d96-ac10058335ed-kube-api-access-tknx6" (OuterVolumeSpecName: "kube-api-access-tknx6") pod "089d1289-afe9-4ffe-9d96-ac10058335ed" (UID: "089d1289-afe9-4ffe-9d96-ac10058335ed"). InnerVolumeSpecName "kube-api-access-tknx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.847441 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9f23804-837d-4d3c-94b7-7cdefe6e94df" (UID: "e9f23804-837d-4d3c-94b7-7cdefe6e94df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.887932 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.897487 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8762f2f2-8375-4fdd-8a29-ea2ab598afa1" (UID: "8762f2f2-8375-4fdd-8a29-ea2ab598afa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914422 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914457 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914467 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqqlz\" (UniqueName: \"kubernetes.io/projected/e9f23804-837d-4d3c-94b7-7cdefe6e94df-kube-api-access-hqqlz\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914479 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thgpf\" (UniqueName: \"kubernetes.io/projected/a172042c-7dc6-4cea-906e-3d9135523f15-kube-api-access-thgpf\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914489 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914498 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd5dt\" (UniqueName: \"kubernetes.io/projected/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-kube-api-access-xd5dt\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914509 4836 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914519 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089d1289-afe9-4ffe-9d96-ac10058335ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914529 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tknx6\" (UniqueName: \"kubernetes.io/projected/089d1289-afe9-4ffe-9d96-ac10058335ed-kube-api-access-tknx6\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914538 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8762f2f2-8375-4fdd-8a29-ea2ab598afa1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914586 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f23804-837d-4d3c-94b7-7cdefe6e94df-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914596 4836 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/985bc83c-52fa-45dc-ab4f-6e47ee47683e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.914605 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfrbt\" (UniqueName: \"kubernetes.io/projected/985bc83c-52fa-45dc-ab4f-6e47ee47683e-kube-api-access-sfrbt\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.960822 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a172042c-7dc6-4cea-906e-3d9135523f15" (UID: "a172042c-7dc6-4cea-906e-3d9135523f15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:28 crc kubenswrapper[4836]: I0217 14:11:28.980937 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.015249 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a172042c-7dc6-4cea-906e-3d9135523f15-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.114356 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.151554 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.378873 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.646209 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" event={"ID":"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b","Type":"ContainerStarted","Data":"a6c42aa007fef7ea456d745f9c58e51c9dca34bd60748608d671a37224d1bf1e"} Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.646275 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" event={"ID":"bd68f8c7-fdcc-449d-9f92-2f7afcb4917b","Type":"ContainerStarted","Data":"8bea1300461bd8244149c813fa4e26a87b305c74e5cde9750a56a339a4aa01e3"} Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.646574 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.648040 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" event={"ID":"985bc83c-52fa-45dc-ab4f-6e47ee47683e","Type":"ContainerDied","Data":"7d0ca8f5e10670b96b45ab236df0ffcb5b0c0577a99d998beb3a30327978aa5e"} Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.648060 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khbdr" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.648108 4836 scope.go:117] "RemoveContainer" containerID="4b1cfa0180186477ad01885b0380528a2ed9a9e38e3b90ab0219a2e26e3de881" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.650288 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwhr" event={"ID":"e9f23804-837d-4d3c-94b7-7cdefe6e94df","Type":"ContainerDied","Data":"aef23167292c1dafb12389117081e87d3bd5bee8abc67ecc65bf3cd0a4bf9f1c"} Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.650417 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwhr" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.652978 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.655371 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfmw4" event={"ID":"8762f2f2-8375-4fdd-8a29-ea2ab598afa1","Type":"ContainerDied","Data":"a5a3c8cb6babc233f1c2ac1e8dd3635788628a1cb1bb705f8a779b47d0562e2b"} Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.655606 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfmw4" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.662994 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w8zr" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.663671 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-252vj" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.663751 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-252vj" event={"ID":"a172042c-7dc6-4cea-906e-3d9135523f15","Type":"ContainerDied","Data":"8ef482fc8eb2712be43ba1d606607d7a887e18d38349afed73ed063a65b62543"} Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.668984 4836 scope.go:117] "RemoveContainer" containerID="c2eaa809f67d2bf376430950b6f31e802fbb9ae20ab0242708d65041bbaf3f07" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.671279 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rhsgl" podStartSLOduration=4.671255436 podStartE2EDuration="4.671255436s" podCreationTimestamp="2026-02-17 14:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:11:29.668230691 +0000 UTC m=+316.011158980" watchObservedRunningTime="2026-02-17 14:11:29.671255436 +0000 UTC m=+316.014183705" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.686724 4836 scope.go:117] "RemoveContainer" containerID="03ed8f2f65fff33093a6776fd604dcab5d3520ae863a96ba61bdb418d4e8293c" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.749746 4836 scope.go:117] "RemoveContainer" containerID="73060b123dbcdb54cacfb96235e77305156ac3a055b89a97013a4725f13fbc92" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.751986 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-252vj"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.768370 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-252vj"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.770433 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-khbdr"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.773842 4836 scope.go:117] "RemoveContainer" containerID="63859747d78e0b196aa1ae4f9aecdf579a3667fc25d5a072dbcb78b4447b6dc2" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.776934 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khbdr"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.780342 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwhr"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.784033 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwhr"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.794503 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfmw4"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.798211 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vfmw4"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.807052 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9w8zr"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.808887 4836 scope.go:117] "RemoveContainer" containerID="c45d87fc95c2bb97baee74cdf9eb8890199ccbcb1361ab9d40701a3bf1b0aef6" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.815918 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9w8zr"] Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.831109 4836 scope.go:117] "RemoveContainer" containerID="fdc430f3f9d22a422de0b99423af704e6cc0b0c2a36fc9623c6db36600886e79" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.847933 4836 scope.go:117] "RemoveContainer" 
containerID="6b92cf31cabf6eec3e2e948b5b114dee8dd77bf37e267af921b53ecc81d156aa" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.869472 4836 scope.go:117] "RemoveContainer" containerID="5fe13927481d2948ed6f845b9678013bbf8fcbf061f7116c0ec82c5abd9ee696" Feb 17 14:11:29 crc kubenswrapper[4836]: I0217 14:11:29.888102 4836 scope.go:117] "RemoveContainer" containerID="fbdef3e9d702e26b2d9eab100a7cb39741759b5bc646072d63aa2cde6951ee43" Feb 17 14:11:30 crc kubenswrapper[4836]: I0217 14:11:30.576828 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" path="/var/lib/kubelet/pods/089d1289-afe9-4ffe-9d96-ac10058335ed/volumes" Feb 17 14:11:30 crc kubenswrapper[4836]: I0217 14:11:30.577639 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" path="/var/lib/kubelet/pods/8762f2f2-8375-4fdd-8a29-ea2ab598afa1/volumes" Feb 17 14:11:30 crc kubenswrapper[4836]: I0217 14:11:30.578423 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" path="/var/lib/kubelet/pods/985bc83c-52fa-45dc-ab4f-6e47ee47683e/volumes" Feb 17 14:11:30 crc kubenswrapper[4836]: I0217 14:11:30.579556 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" path="/var/lib/kubelet/pods/a172042c-7dc6-4cea-906e-3d9135523f15/volumes" Feb 17 14:11:30 crc kubenswrapper[4836]: I0217 14:11:30.580253 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" path="/var/lib/kubelet/pods/e9f23804-837d-4d3c-94b7-7cdefe6e94df/volumes" Feb 17 14:11:31 crc kubenswrapper[4836]: I0217 14:11:31.244875 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 14:11:31 crc kubenswrapper[4836]: I0217 14:11:31.626437 4836 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 14:11:36 crc kubenswrapper[4836]: I0217 14:11:36.975636 4836 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 14:11:36 crc kubenswrapper[4836]: I0217 14:11:36.976706 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e" gracePeriod=5 Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.550430 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.551329 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.576287 4836 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.587484 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.587528 4836 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9043d749-7e9c-488b-b0a8-bee71a618a8c" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.590774 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.590844 4836 kubelet.go:2673] "Unable to find 
pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9043d749-7e9c-488b-b0a8-bee71a618a8c" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.705966 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706052 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706156 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706163 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706193 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706287 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706324 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706408 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706516 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.706985 4836 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.707000 4836 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.707010 4836 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.707019 4836 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.717396 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.744608 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.744667 4836 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e" exitCode=137 Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.744716 4836 scope.go:117] "RemoveContainer" containerID="8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.744754 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.765121 4836 scope.go:117] "RemoveContainer" containerID="8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e" Feb 17 14:11:42 crc kubenswrapper[4836]: E0217 14:11:42.765760 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e\": container with ID starting with 8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e not found: ID does not exist" containerID="8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.765817 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e"} err="failed to get container status \"8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e\": rpc error: code = NotFound desc = could not find container 
\"8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e\": container with ID starting with 8757baf5bc7be098bf127cb1aafc088ae6d42ff3064423215125a6bb9cdaa67e not found: ID does not exist" Feb 17 14:11:42 crc kubenswrapper[4836]: I0217 14:11:42.808199 4836 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:44 crc kubenswrapper[4836]: I0217 14:11:44.575833 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 14:12:17 crc kubenswrapper[4836]: I0217 14:12:17.739197 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5l6x4"] Feb 17 14:12:17 crc kubenswrapper[4836]: I0217 14:12:17.739919 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" podUID="8c77bcf1-4025-4c35-9580-41e9a61195e8" containerName="controller-manager" containerID="cri-o://48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c" gracePeriod=30 Feb 17 14:12:17 crc kubenswrapper[4836]: I0217 14:12:17.818629 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4"] Feb 17 14:12:17 crc kubenswrapper[4836]: I0217 14:12:17.818940 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" podUID="5ad14aa6-962d-4f8f-babe-745f65d63560" containerName="route-controller-manager" containerID="cri-o://7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e" gracePeriod=30 Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.103103 4836 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.156823 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202398 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-client-ca\") pod \"8c77bcf1-4025-4c35-9580-41e9a61195e8\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202499 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad14aa6-962d-4f8f-babe-745f65d63560-serving-cert\") pod \"5ad14aa6-962d-4f8f-babe-745f65d63560\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202597 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert\") pod \"8c77bcf1-4025-4c35-9580-41e9a61195e8\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202638 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6ml\" (UniqueName: \"kubernetes.io/projected/8c77bcf1-4025-4c35-9580-41e9a61195e8-kube-api-access-nj6ml\") pod \"8c77bcf1-4025-4c35-9580-41e9a61195e8\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202685 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-config\") pod 
\"5ad14aa6-962d-4f8f-babe-745f65d63560\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202720 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-client-ca\") pod \"5ad14aa6-962d-4f8f-babe-745f65d63560\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202747 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config\") pod \"8c77bcf1-4025-4c35-9580-41e9a61195e8\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202809 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles\") pod \"8c77bcf1-4025-4c35-9580-41e9a61195e8\" (UID: \"8c77bcf1-4025-4c35-9580-41e9a61195e8\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.202883 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhngg\" (UniqueName: \"kubernetes.io/projected/5ad14aa6-962d-4f8f-babe-745f65d63560-kube-api-access-fhngg\") pod \"5ad14aa6-962d-4f8f-babe-745f65d63560\" (UID: \"5ad14aa6-962d-4f8f-babe-745f65d63560\") " Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.204502 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-config" (OuterVolumeSpecName: "config") pod "5ad14aa6-962d-4f8f-babe-745f65d63560" (UID: "5ad14aa6-962d-4f8f-babe-745f65d63560"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.205197 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8c77bcf1-4025-4c35-9580-41e9a61195e8" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.204789 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config" (OuterVolumeSpecName: "config") pod "8c77bcf1-4025-4c35-9580-41e9a61195e8" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.205314 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ad14aa6-962d-4f8f-babe-745f65d63560" (UID: "5ad14aa6-962d-4f8f-babe-745f65d63560"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.205492 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c77bcf1-4025-4c35-9580-41e9a61195e8" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.211937 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad14aa6-962d-4f8f-babe-745f65d63560-kube-api-access-fhngg" (OuterVolumeSpecName: "kube-api-access-fhngg") pod "5ad14aa6-962d-4f8f-babe-745f65d63560" (UID: "5ad14aa6-962d-4f8f-babe-745f65d63560"). InnerVolumeSpecName "kube-api-access-fhngg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.211942 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad14aa6-962d-4f8f-babe-745f65d63560-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ad14aa6-962d-4f8f-babe-745f65d63560" (UID: "5ad14aa6-962d-4f8f-babe-745f65d63560"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.211950 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c77bcf1-4025-4c35-9580-41e9a61195e8" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.212241 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c77bcf1-4025-4c35-9580-41e9a61195e8-kube-api-access-nj6ml" (OuterVolumeSpecName: "kube-api-access-nj6ml") pod "8c77bcf1-4025-4c35-9580-41e9a61195e8" (UID: "8c77bcf1-4025-4c35-9580-41e9a61195e8"). InnerVolumeSpecName "kube-api-access-nj6ml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304396 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304455 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad14aa6-962d-4f8f-babe-745f65d63560-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304473 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c77bcf1-4025-4c35-9580-41e9a61195e8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304494 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6ml\" (UniqueName: \"kubernetes.io/projected/8c77bcf1-4025-4c35-9580-41e9a61195e8-kube-api-access-nj6ml\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304514 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304530 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ad14aa6-962d-4f8f-babe-745f65d63560-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304546 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304563 4836 reconciler_common.go:293] "Volume detached for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c77bcf1-4025-4c35-9580-41e9a61195e8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.304580 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhngg\" (UniqueName: \"kubernetes.io/projected/5ad14aa6-962d-4f8f-babe-745f65d63560-kube-api-access-fhngg\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.384702 4836 generic.go:334] "Generic (PLEG): container finished" podID="5ad14aa6-962d-4f8f-babe-745f65d63560" containerID="7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e" exitCode=0 Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.384754 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" event={"ID":"5ad14aa6-962d-4f8f-babe-745f65d63560","Type":"ContainerDied","Data":"7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e"} Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.384788 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.384826 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4" event={"ID":"5ad14aa6-962d-4f8f-babe-745f65d63560","Type":"ContainerDied","Data":"5a18cb47469c9084e91c362d0628474be4ea76582e846e1e93705e36c466141f"} Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.384851 4836 scope.go:117] "RemoveContainer" containerID="7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.387550 4836 generic.go:334] "Generic (PLEG): container finished" podID="8c77bcf1-4025-4c35-9580-41e9a61195e8" containerID="48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c" exitCode=0 Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.387599 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" event={"ID":"8c77bcf1-4025-4c35-9580-41e9a61195e8","Type":"ContainerDied","Data":"48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c"} Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.387619 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" event={"ID":"8c77bcf1-4025-4c35-9580-41e9a61195e8","Type":"ContainerDied","Data":"b99d73db17eb9c6b2aa85ca03f0903902f643a2f2fbc708d9b4c51f4e9d1ede7"} Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.387625 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5l6x4" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.413016 4836 scope.go:117] "RemoveContainer" containerID="7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e" Feb 17 14:12:18 crc kubenswrapper[4836]: E0217 14:12:18.414051 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e\": container with ID starting with 7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e not found: ID does not exist" containerID="7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.414084 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e"} err="failed to get container status \"7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e\": rpc error: code = NotFound desc = could not find container \"7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e\": container with ID starting with 7bf8b1ad3766a39524e948476d227ffd5480f4d617fc83f3457ebccacb5d8b7e not found: ID does not exist" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.414136 4836 scope.go:117] "RemoveContainer" containerID="48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.441910 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5l6x4"] Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.448823 4836 scope.go:117] "RemoveContainer" containerID="48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c" Feb 17 14:12:18 crc kubenswrapper[4836]: E0217 14:12:18.449605 4836 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c\": container with ID starting with 48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c not found: ID does not exist" containerID="48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.449732 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c"} err="failed to get container status \"48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c\": rpc error: code = NotFound desc = could not find container \"48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c\": container with ID starting with 48a4ab3353f6c1184058baaeccdfdd55224e71bf388392ae1f7582ff6b6cbd0c not found: ID does not exist" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.451037 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5l6x4"] Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.455392 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4"] Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.459598 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mmw4"] Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.577061 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad14aa6-962d-4f8f-babe-745f65d63560" path="/var/lib/kubelet/pods/5ad14aa6-962d-4f8f-babe-745f65d63560/volumes" Feb 17 14:12:18 crc kubenswrapper[4836]: I0217 14:12:18.578012 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c77bcf1-4025-4c35-9580-41e9a61195e8" 
path="/var/lib/kubelet/pods/8c77bcf1-4025-4c35-9580-41e9a61195e8/volumes" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.162508 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69d6c474bd-9hpt4"] Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163251 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163266 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163274 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163281 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163312 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163320 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163329 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c77bcf1-4025-4c35-9580-41e9a61195e8" containerName="controller-manager" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163341 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c77bcf1-4025-4c35-9580-41e9a61195e8" containerName="controller-manager" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163354 4836 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5ad14aa6-962d-4f8f-babe-745f65d63560" containerName="route-controller-manager" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163359 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad14aa6-962d-4f8f-babe-745f65d63560" containerName="route-controller-manager" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163366 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" containerName="marketplace-operator" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163372 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" containerName="marketplace-operator" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163383 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163389 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163401 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163407 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163414 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163420 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163428 4836 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163434 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163444 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163450 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163458 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163464 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163471 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163477 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163486 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163492 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="extract-content" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163499 4836 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163505 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.163513 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163519 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="extract-utilities" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163615 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="a172042c-7dc6-4cea-906e-3d9135523f15" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163629 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="8762f2f2-8375-4fdd-8a29-ea2ab598afa1" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163636 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="985bc83c-52fa-45dc-ab4f-6e47ee47683e" containerName="marketplace-operator" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163643 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c77bcf1-4025-4c35-9580-41e9a61195e8" containerName="controller-manager" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163650 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163659 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f23804-837d-4d3c-94b7-7cdefe6e94df" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163668 4836 
memory_manager.go:354] "RemoveStaleState removing state" podUID="089d1289-afe9-4ffe-9d96-ac10058335ed" containerName="registry-server" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.163674 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad14aa6-962d-4f8f-babe-745f65d63560" containerName="route-controller-manager" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.164164 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.166644 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.166652 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.167458 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.167523 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.167522 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.167775 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.176337 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.182585 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-69d6c474bd-9hpt4"] Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.198957 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb"] Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.199897 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.201730 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.202077 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.202899 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.203030 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.203165 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.203447 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.224406 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb"] Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.286000 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-69d6c474bd-9hpt4"] Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.286691 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-wttn9 proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" podUID="e4797214-796b-4e39-ae05-c719bbffd7bf" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.315341 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb"] Feb 17 14:12:19 crc kubenswrapper[4836]: E0217 14:12:19.315978 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-h9jbv serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" podUID="34e63fa4-25c0-40bc-85bf-9428bc0842b0" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.345929 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-client-ca\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.345991 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e63fa4-25c0-40bc-85bf-9428bc0842b0-serving-cert\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc 
kubenswrapper[4836]: I0217 14:12:19.346164 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-config\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.346250 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-client-ca\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.346338 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-proxy-ca-bundles\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.346383 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-config\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.346442 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4797214-796b-4e39-ae05-c719bbffd7bf-serving-cert\") pod 
\"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.346470 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wttn9\" (UniqueName: \"kubernetes.io/projected/e4797214-796b-4e39-ae05-c719bbffd7bf-kube-api-access-wttn9\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.346647 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9jbv\" (UniqueName: \"kubernetes.io/projected/34e63fa4-25c0-40bc-85bf-9428bc0842b0-kube-api-access-h9jbv\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.397018 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.397051 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.407982 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.414222 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448429 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9jbv\" (UniqueName: \"kubernetes.io/projected/34e63fa4-25c0-40bc-85bf-9428bc0842b0-kube-api-access-h9jbv\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448510 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-client-ca\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448539 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e63fa4-25c0-40bc-85bf-9428bc0842b0-serving-cert\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448568 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-config\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448595 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-client-ca\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448625 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-proxy-ca-bundles\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448670 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-config\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448710 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4797214-796b-4e39-ae05-c719bbffd7bf-serving-cert\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.448743 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wttn9\" (UniqueName: \"kubernetes.io/projected/e4797214-796b-4e39-ae05-c719bbffd7bf-kube-api-access-wttn9\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 
14:12:19.449658 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-client-ca\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.450066 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-client-ca\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.450495 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-proxy-ca-bundles\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.450696 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-config\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.450800 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-config\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " 
pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.455766 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4797214-796b-4e39-ae05-c719bbffd7bf-serving-cert\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.457841 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e63fa4-25c0-40bc-85bf-9428bc0842b0-serving-cert\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.466360 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9jbv\" (UniqueName: \"kubernetes.io/projected/34e63fa4-25c0-40bc-85bf-9428bc0842b0-kube-api-access-h9jbv\") pod \"route-controller-manager-6bb9976d8b-krcmb\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.474072 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wttn9\" (UniqueName: \"kubernetes.io/projected/e4797214-796b-4e39-ae05-c719bbffd7bf-kube-api-access-wttn9\") pod \"controller-manager-69d6c474bd-9hpt4\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650536 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-config\") pod \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650630 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-client-ca\") pod \"e4797214-796b-4e39-ae05-c719bbffd7bf\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650716 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-proxy-ca-bundles\") pod \"e4797214-796b-4e39-ae05-c719bbffd7bf\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650790 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e63fa4-25c0-40bc-85bf-9428bc0842b0-serving-cert\") pod \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650811 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9jbv\" (UniqueName: \"kubernetes.io/projected/34e63fa4-25c0-40bc-85bf-9428bc0842b0-kube-api-access-h9jbv\") pod \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650859 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-config\") pod \"e4797214-796b-4e39-ae05-c719bbffd7bf\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650887 4836 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-client-ca\") pod \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\" (UID: \"34e63fa4-25c0-40bc-85bf-9428bc0842b0\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650925 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wttn9\" (UniqueName: \"kubernetes.io/projected/e4797214-796b-4e39-ae05-c719bbffd7bf-kube-api-access-wttn9\") pod \"e4797214-796b-4e39-ae05-c719bbffd7bf\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.650946 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4797214-796b-4e39-ae05-c719bbffd7bf-serving-cert\") pod \"e4797214-796b-4e39-ae05-c719bbffd7bf\" (UID: \"e4797214-796b-4e39-ae05-c719bbffd7bf\") " Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.651905 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "e4797214-796b-4e39-ae05-c719bbffd7bf" (UID: "e4797214-796b-4e39-ae05-c719bbffd7bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.652065 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-config" (OuterVolumeSpecName: "config") pod "34e63fa4-25c0-40bc-85bf-9428bc0842b0" (UID: "34e63fa4-25c0-40bc-85bf-9428bc0842b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.652657 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "34e63fa4-25c0-40bc-85bf-9428bc0842b0" (UID: "34e63fa4-25c0-40bc-85bf-9428bc0842b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.652940 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-config" (OuterVolumeSpecName: "config") pod "e4797214-796b-4e39-ae05-c719bbffd7bf" (UID: "e4797214-796b-4e39-ae05-c719bbffd7bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.653013 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e4797214-796b-4e39-ae05-c719bbffd7bf" (UID: "e4797214-796b-4e39-ae05-c719bbffd7bf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.655592 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e63fa4-25c0-40bc-85bf-9428bc0842b0-kube-api-access-h9jbv" (OuterVolumeSpecName: "kube-api-access-h9jbv") pod "34e63fa4-25c0-40bc-85bf-9428bc0842b0" (UID: "34e63fa4-25c0-40bc-85bf-9428bc0842b0"). InnerVolumeSpecName "kube-api-access-h9jbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.656886 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e63fa4-25c0-40bc-85bf-9428bc0842b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34e63fa4-25c0-40bc-85bf-9428bc0842b0" (UID: "34e63fa4-25c0-40bc-85bf-9428bc0842b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.657017 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4797214-796b-4e39-ae05-c719bbffd7bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e4797214-796b-4e39-ae05-c719bbffd7bf" (UID: "e4797214-796b-4e39-ae05-c719bbffd7bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.659391 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4797214-796b-4e39-ae05-c719bbffd7bf-kube-api-access-wttn9" (OuterVolumeSpecName: "kube-api-access-wttn9") pod "e4797214-796b-4e39-ae05-c719bbffd7bf" (UID: "e4797214-796b-4e39-ae05-c719bbffd7bf"). InnerVolumeSpecName "kube-api-access-wttn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752595 4836 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752657 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e63fa4-25c0-40bc-85bf-9428bc0842b0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752672 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9jbv\" (UniqueName: \"kubernetes.io/projected/34e63fa4-25c0-40bc-85bf-9428bc0842b0-kube-api-access-h9jbv\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752689 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752706 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752718 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4797214-796b-4e39-ae05-c719bbffd7bf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752731 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wttn9\" (UniqueName: \"kubernetes.io/projected/e4797214-796b-4e39-ae05-c719bbffd7bf-kube-api-access-wttn9\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752744 4836 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e63fa4-25c0-40bc-85bf-9428bc0842b0-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:19 crc kubenswrapper[4836]: I0217 14:12:19.752757 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4797214-796b-4e39-ae05-c719bbffd7bf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.403670 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69d6c474bd-9hpt4" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.403761 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.454477 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq"] Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.455200 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.459228 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.459483 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.461108 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.461381 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.461551 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.468581 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.474019 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb"] Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.480439 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-krcmb"] Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.483711 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq"] Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.491032 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-69d6c474bd-9hpt4"] Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.493592 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69d6c474bd-9hpt4"] Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.564416 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-client-ca\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.564465 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrs9m\" (UniqueName: \"kubernetes.io/projected/a41b80c5-58ef-4d96-a176-02d0618297ee-kube-api-access-lrs9m\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.564560 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-config\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.564599 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41b80c5-58ef-4d96-a176-02d0618297ee-serving-cert\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " 
pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.575860 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e63fa4-25c0-40bc-85bf-9428bc0842b0" path="/var/lib/kubelet/pods/34e63fa4-25c0-40bc-85bf-9428bc0842b0/volumes" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.576330 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4797214-796b-4e39-ae05-c719bbffd7bf" path="/var/lib/kubelet/pods/e4797214-796b-4e39-ae05-c719bbffd7bf/volumes" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.666171 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-config\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.666231 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41b80c5-58ef-4d96-a176-02d0618297ee-serving-cert\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.666255 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-client-ca\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.666274 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lrs9m\" (UniqueName: \"kubernetes.io/projected/a41b80c5-58ef-4d96-a176-02d0618297ee-kube-api-access-lrs9m\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.667520 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-config\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.667712 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-client-ca\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.670872 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41b80c5-58ef-4d96-a176-02d0618297ee-serving-cert\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.685406 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrs9m\" (UniqueName: \"kubernetes.io/projected/a41b80c5-58ef-4d96-a176-02d0618297ee-kube-api-access-lrs9m\") pod \"route-controller-manager-655d9f6b-9w5qq\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" 
Feb 17 14:12:20 crc kubenswrapper[4836]: I0217 14:12:20.777770 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:21 crc kubenswrapper[4836]: I0217 14:12:21.016633 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq"] Feb 17 14:12:21 crc kubenswrapper[4836]: I0217 14:12:21.411785 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" event={"ID":"a41b80c5-58ef-4d96-a176-02d0618297ee","Type":"ContainerStarted","Data":"ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb"} Feb 17 14:12:21 crc kubenswrapper[4836]: I0217 14:12:21.412186 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" event={"ID":"a41b80c5-58ef-4d96-a176-02d0618297ee","Type":"ContainerStarted","Data":"4ca4605b7bfe00847e11510ae10a3a3ad08cfbb71d140733665acea3abcefee6"} Feb 17 14:12:21 crc kubenswrapper[4836]: I0217 14:12:21.413985 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:21 crc kubenswrapper[4836]: I0217 14:12:21.440518 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" podStartSLOduration=2.440492006 podStartE2EDuration="2.440492006s" podCreationTimestamp="2026-02-17 14:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:12:21.439230827 +0000 UTC m=+367.782159106" watchObservedRunningTime="2026-02-17 14:12:21.440492006 +0000 UTC m=+367.783420285" Feb 17 14:12:21 crc kubenswrapper[4836]: I0217 
14:12:21.564275 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.150227 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85f58dddc7-nc7rr"] Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.151359 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.156052 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.156052 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.157511 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.157831 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.158487 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.158714 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.161746 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.177783 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-85f58dddc7-nc7rr"] Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.303407 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-client-ca\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.303468 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdjgk\" (UniqueName: \"kubernetes.io/projected/3135ca20-3162-4278-bbd7-de1d6f977dfe-kube-api-access-hdjgk\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.303516 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-proxy-ca-bundles\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.303725 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3135ca20-3162-4278-bbd7-de1d6f977dfe-serving-cert\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.303861 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-config\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.405496 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-proxy-ca-bundles\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.405976 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3135ca20-3162-4278-bbd7-de1d6f977dfe-serving-cert\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.406119 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-config\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.406335 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-client-ca\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.406450 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdjgk\" (UniqueName: \"kubernetes.io/projected/3135ca20-3162-4278-bbd7-de1d6f977dfe-kube-api-access-hdjgk\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.407984 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-proxy-ca-bundles\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.408098 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-client-ca\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.408244 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-config\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.414078 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3135ca20-3162-4278-bbd7-de1d6f977dfe-serving-cert\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 
17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.430470 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdjgk\" (UniqueName: \"kubernetes.io/projected/3135ca20-3162-4278-bbd7-de1d6f977dfe-kube-api-access-hdjgk\") pod \"controller-manager-85f58dddc7-nc7rr\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.477960 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:23 crc kubenswrapper[4836]: I0217 14:12:23.689494 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85f58dddc7-nc7rr"] Feb 17 14:12:24 crc kubenswrapper[4836]: I0217 14:12:24.434015 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" event={"ID":"3135ca20-3162-4278-bbd7-de1d6f977dfe","Type":"ContainerStarted","Data":"7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a"} Feb 17 14:12:24 crc kubenswrapper[4836]: I0217 14:12:24.434551 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" event={"ID":"3135ca20-3162-4278-bbd7-de1d6f977dfe","Type":"ContainerStarted","Data":"7e2281e26a53d8122732046bbdbf7cffb006b03fc9fd9923b35c064e85e5470c"} Feb 17 14:12:24 crc kubenswrapper[4836]: I0217 14:12:24.434576 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:24 crc kubenswrapper[4836]: I0217 14:12:24.444252 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:24 crc kubenswrapper[4836]: I0217 14:12:24.455388 4836 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" podStartSLOduration=5.455362852 podStartE2EDuration="5.455362852s" podCreationTimestamp="2026-02-17 14:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:12:24.455275319 +0000 UTC m=+370.798203598" watchObservedRunningTime="2026-02-17 14:12:24.455362852 +0000 UTC m=+370.798291121" Feb 17 14:12:29 crc kubenswrapper[4836]: I0217 14:12:29.765275 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:12:29 crc kubenswrapper[4836]: I0217 14:12:29.765387 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:12:37 crc kubenswrapper[4836]: I0217 14:12:37.717742 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq"] Feb 17 14:12:37 crc kubenswrapper[4836]: I0217 14:12:37.718563 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" podUID="a41b80c5-58ef-4d96-a176-02d0618297ee" containerName="route-controller-manager" containerID="cri-o://ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb" gracePeriod=30 Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.100424 4836 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.202273 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41b80c5-58ef-4d96-a176-02d0618297ee-serving-cert\") pod \"a41b80c5-58ef-4d96-a176-02d0618297ee\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.202435 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-config\") pod \"a41b80c5-58ef-4d96-a176-02d0618297ee\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.202474 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-client-ca\") pod \"a41b80c5-58ef-4d96-a176-02d0618297ee\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.202663 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrs9m\" (UniqueName: \"kubernetes.io/projected/a41b80c5-58ef-4d96-a176-02d0618297ee-kube-api-access-lrs9m\") pod \"a41b80c5-58ef-4d96-a176-02d0618297ee\" (UID: \"a41b80c5-58ef-4d96-a176-02d0618297ee\") " Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.204211 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "a41b80c5-58ef-4d96-a176-02d0618297ee" (UID: "a41b80c5-58ef-4d96-a176-02d0618297ee"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.204199 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-config" (OuterVolumeSpecName: "config") pod "a41b80c5-58ef-4d96-a176-02d0618297ee" (UID: "a41b80c5-58ef-4d96-a176-02d0618297ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.208662 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41b80c5-58ef-4d96-a176-02d0618297ee-kube-api-access-lrs9m" (OuterVolumeSpecName: "kube-api-access-lrs9m") pod "a41b80c5-58ef-4d96-a176-02d0618297ee" (UID: "a41b80c5-58ef-4d96-a176-02d0618297ee"). InnerVolumeSpecName "kube-api-access-lrs9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.208918 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41b80c5-58ef-4d96-a176-02d0618297ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a41b80c5-58ef-4d96-a176-02d0618297ee" (UID: "a41b80c5-58ef-4d96-a176-02d0618297ee"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.304464 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrs9m\" (UniqueName: \"kubernetes.io/projected/a41b80c5-58ef-4d96-a176-02d0618297ee-kube-api-access-lrs9m\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.304514 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41b80c5-58ef-4d96-a176-02d0618297ee-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.304528 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.304540 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a41b80c5-58ef-4d96-a176-02d0618297ee-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.526471 4836 generic.go:334] "Generic (PLEG): container finished" podID="a41b80c5-58ef-4d96-a176-02d0618297ee" containerID="ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb" exitCode=0 Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.526527 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" event={"ID":"a41b80c5-58ef-4d96-a176-02d0618297ee","Type":"ContainerDied","Data":"ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb"} Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.526551 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.526568 4836 scope.go:117] "RemoveContainer" containerID="ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.526558 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq" event={"ID":"a41b80c5-58ef-4d96-a176-02d0618297ee","Type":"ContainerDied","Data":"4ca4605b7bfe00847e11510ae10a3a3ad08cfbb71d140733665acea3abcefee6"} Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.548470 4836 scope.go:117] "RemoveContainer" containerID="ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb" Feb 17 14:12:38 crc kubenswrapper[4836]: E0217 14:12:38.549175 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb\": container with ID starting with ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb not found: ID does not exist" containerID="ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.549213 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb"} err="failed to get container status \"ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb\": rpc error: code = NotFound desc = could not find container \"ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb\": container with ID starting with ea6b6d4a9dc8e039294cc0c6dd70b4120029dc0a72d0e5936067797486705deb not found: ID does not exist" Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.581602 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq"] Feb 17 14:12:38 crc kubenswrapper[4836]: I0217 14:12:38.585576 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-655d9f6b-9w5qq"] Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.157821 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx"] Feb 17 14:12:39 crc kubenswrapper[4836]: E0217 14:12:39.158339 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41b80c5-58ef-4d96-a176-02d0618297ee" containerName="route-controller-manager" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.158357 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41b80c5-58ef-4d96-a176-02d0618297ee" containerName="route-controller-manager" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.158460 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41b80c5-58ef-4d96-a176-02d0618297ee" containerName="route-controller-manager" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.158818 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.160724 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.160752 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.161457 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.162096 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.162889 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.165739 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.176385 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx"] Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.214320 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21265df2-25f1-466c-b267-95de545523c8-client-ca\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.214383 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21265df2-25f1-466c-b267-95de545523c8-serving-cert\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.214450 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21265df2-25f1-466c-b267-95de545523c8-config\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.214479 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2s75\" (UniqueName: \"kubernetes.io/projected/21265df2-25f1-466c-b267-95de545523c8-kube-api-access-v2s75\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.315125 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21265df2-25f1-466c-b267-95de545523c8-config\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.315174 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s75\" (UniqueName: \"kubernetes.io/projected/21265df2-25f1-466c-b267-95de545523c8-kube-api-access-v2s75\") pod 
\"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.315237 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21265df2-25f1-466c-b267-95de545523c8-client-ca\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.315257 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21265df2-25f1-466c-b267-95de545523c8-serving-cert\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.316330 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21265df2-25f1-466c-b267-95de545523c8-client-ca\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.316453 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21265df2-25f1-466c-b267-95de545523c8-config\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.319157 4836 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21265df2-25f1-466c-b267-95de545523c8-serving-cert\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.331717 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2s75\" (UniqueName: \"kubernetes.io/projected/21265df2-25f1-466c-b267-95de545523c8-kube-api-access-v2s75\") pod \"route-controller-manager-6bb9976d8b-b8bzx\" (UID: \"21265df2-25f1-466c-b267-95de545523c8\") " pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.474366 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:39 crc kubenswrapper[4836]: I0217 14:12:39.908512 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx"] Feb 17 14:12:39 crc kubenswrapper[4836]: W0217 14:12:39.913438 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21265df2_25f1_466c_b267_95de545523c8.slice/crio-3c88aa34a5ef7da27231a11ea2ecd16c4bc19e305853972a881e458301462b71 WatchSource:0}: Error finding container 3c88aa34a5ef7da27231a11ea2ecd16c4bc19e305853972a881e458301462b71: Status 404 returned error can't find the container with id 3c88aa34a5ef7da27231a11ea2ecd16c4bc19e305853972a881e458301462b71 Feb 17 14:12:40 crc kubenswrapper[4836]: I0217 14:12:40.538978 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" 
event={"ID":"21265df2-25f1-466c-b267-95de545523c8","Type":"ContainerStarted","Data":"c7748556e85cb8444b90b10d4dc94cc5ec7aa1761b739525c90fa073be2d9287"} Feb 17 14:12:40 crc kubenswrapper[4836]: I0217 14:12:40.539313 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" event={"ID":"21265df2-25f1-466c-b267-95de545523c8","Type":"ContainerStarted","Data":"3c88aa34a5ef7da27231a11ea2ecd16c4bc19e305853972a881e458301462b71"} Feb 17 14:12:40 crc kubenswrapper[4836]: I0217 14:12:40.539339 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:40 crc kubenswrapper[4836]: I0217 14:12:40.544694 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" Feb 17 14:12:40 crc kubenswrapper[4836]: I0217 14:12:40.556630 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb9976d8b-b8bzx" podStartSLOduration=3.5566019129999997 podStartE2EDuration="3.556601913s" podCreationTimestamp="2026-02-17 14:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:12:40.552736625 +0000 UTC m=+386.895664894" watchObservedRunningTime="2026-02-17 14:12:40.556601913 +0000 UTC m=+386.899530182" Feb 17 14:12:40 crc kubenswrapper[4836]: I0217 14:12:40.574094 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41b80c5-58ef-4d96-a176-02d0618297ee" path="/var/lib/kubelet/pods/a41b80c5-58ef-4d96-a176-02d0618297ee/volumes" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.566259 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xxj4j"] Feb 17 14:12:41 crc 
kubenswrapper[4836]: I0217 14:12:41.567535 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.569918 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.578422 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxj4j"] Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.642869 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaecd71b-3b00-427a-9654-9d04af5469b9-catalog-content\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.642993 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qllf\" (UniqueName: \"kubernetes.io/projected/eaecd71b-3b00-427a-9654-9d04af5469b9-kube-api-access-4qllf\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.643155 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaecd71b-3b00-427a-9654-9d04af5469b9-utilities\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.745110 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/eaecd71b-3b00-427a-9654-9d04af5469b9-utilities\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.745182 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaecd71b-3b00-427a-9654-9d04af5469b9-catalog-content\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.745205 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qllf\" (UniqueName: \"kubernetes.io/projected/eaecd71b-3b00-427a-9654-9d04af5469b9-kube-api-access-4qllf\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.746022 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaecd71b-3b00-427a-9654-9d04af5469b9-utilities\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.746030 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaecd71b-3b00-427a-9654-9d04af5469b9-catalog-content\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.768219 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qllf\" (UniqueName: 
\"kubernetes.io/projected/eaecd71b-3b00-427a-9654-9d04af5469b9-kube-api-access-4qllf\") pod \"certified-operators-xxj4j\" (UID: \"eaecd71b-3b00-427a-9654-9d04af5469b9\") " pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:41 crc kubenswrapper[4836]: I0217 14:12:41.927781 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.334397 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxj4j"] Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.551817 4836 generic.go:334] "Generic (PLEG): container finished" podID="eaecd71b-3b00-427a-9654-9d04af5469b9" containerID="aa7f9ce852f645a39beee4170859071aedade9693dce893892a107d8b6ef0aae" exitCode=0 Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.551918 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxj4j" event={"ID":"eaecd71b-3b00-427a-9654-9d04af5469b9","Type":"ContainerDied","Data":"aa7f9ce852f645a39beee4170859071aedade9693dce893892a107d8b6ef0aae"} Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.552234 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxj4j" event={"ID":"eaecd71b-3b00-427a-9654-9d04af5469b9","Type":"ContainerStarted","Data":"9e95b3dc7d55a998957a83e71e517055b8e4593ed57b59207d796a207e2973c1"} Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.578843 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zjxvt"] Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.580040 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.582506 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.585322 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zjxvt"] Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.655767 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212802dd-4c4f-444a-b443-bc3bbd1431bc-utilities\") pod \"community-operators-zjxvt\" (UID: \"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.655816 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxt6l\" (UniqueName: \"kubernetes.io/projected/212802dd-4c4f-444a-b443-bc3bbd1431bc-kube-api-access-wxt6l\") pod \"community-operators-zjxvt\" (UID: \"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.655844 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212802dd-4c4f-444a-b443-bc3bbd1431bc-catalog-content\") pod \"community-operators-zjxvt\" (UID: \"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.757384 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212802dd-4c4f-444a-b443-bc3bbd1431bc-catalog-content\") pod \"community-operators-zjxvt\" (UID: 
\"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.757499 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212802dd-4c4f-444a-b443-bc3bbd1431bc-utilities\") pod \"community-operators-zjxvt\" (UID: \"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.757522 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxt6l\" (UniqueName: \"kubernetes.io/projected/212802dd-4c4f-444a-b443-bc3bbd1431bc-kube-api-access-wxt6l\") pod \"community-operators-zjxvt\" (UID: \"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.758269 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212802dd-4c4f-444a-b443-bc3bbd1431bc-catalog-content\") pod \"community-operators-zjxvt\" (UID: \"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.758597 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212802dd-4c4f-444a-b443-bc3bbd1431bc-utilities\") pod \"community-operators-zjxvt\" (UID: \"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.778480 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxt6l\" (UniqueName: \"kubernetes.io/projected/212802dd-4c4f-444a-b443-bc3bbd1431bc-kube-api-access-wxt6l\") pod \"community-operators-zjxvt\" (UID: 
\"212802dd-4c4f-444a-b443-bc3bbd1431bc\") " pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:42 crc kubenswrapper[4836]: I0217 14:12:42.929851 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.357131 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zjxvt"] Feb 17 14:12:43 crc kubenswrapper[4836]: W0217 14:12:43.363633 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212802dd_4c4f_444a_b443_bc3bbd1431bc.slice/crio-02fb12ea464e187c38e849c376aa4db9e9c06deb2e23fabe64865bbea9e1ddbd WatchSource:0}: Error finding container 02fb12ea464e187c38e849c376aa4db9e9c06deb2e23fabe64865bbea9e1ddbd: Status 404 returned error can't find the container with id 02fb12ea464e187c38e849c376aa4db9e9c06deb2e23fabe64865bbea9e1ddbd Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.562548 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxj4j" event={"ID":"eaecd71b-3b00-427a-9654-9d04af5469b9","Type":"ContainerStarted","Data":"21a01e036ae8bc2d46ea0e432083ed8e2bc2fae27eab393688cc5b04a75de90c"} Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.563966 4836 generic.go:334] "Generic (PLEG): container finished" podID="212802dd-4c4f-444a-b443-bc3bbd1431bc" containerID="d9ae08d204abbfd6e235974dfd87d8cc8bc8ecedd8f26f0aed33b4f099c1a758" exitCode=0 Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.564060 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjxvt" event={"ID":"212802dd-4c4f-444a-b443-bc3bbd1431bc","Type":"ContainerDied","Data":"d9ae08d204abbfd6e235974dfd87d8cc8bc8ecedd8f26f0aed33b4f099c1a758"} Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.564092 4836 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-zjxvt" event={"ID":"212802dd-4c4f-444a-b443-bc3bbd1431bc","Type":"ContainerStarted","Data":"02fb12ea464e187c38e849c376aa4db9e9c06deb2e23fabe64865bbea9e1ddbd"} Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.964161 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8gtc9"] Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.965689 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.967812 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 14:12:43 crc kubenswrapper[4836]: I0217 14:12:43.977577 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gtc9"] Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.073924 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb3c078-0953-4561-a532-cc25ff32d845-utilities\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.073977 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27sqg\" (UniqueName: \"kubernetes.io/projected/8fb3c078-0953-4561-a532-cc25ff32d845-kube-api-access-27sqg\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.074105 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8fb3c078-0953-4561-a532-cc25ff32d845-catalog-content\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.175259 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb3c078-0953-4561-a532-cc25ff32d845-catalog-content\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.175370 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb3c078-0953-4561-a532-cc25ff32d845-utilities\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.175492 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27sqg\" (UniqueName: \"kubernetes.io/projected/8fb3c078-0953-4561-a532-cc25ff32d845-kube-api-access-27sqg\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.175888 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb3c078-0953-4561-a532-cc25ff32d845-catalog-content\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.175959 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8fb3c078-0953-4561-a532-cc25ff32d845-utilities\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.196230 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27sqg\" (UniqueName: \"kubernetes.io/projected/8fb3c078-0953-4561-a532-cc25ff32d845-kube-api-access-27sqg\") pod \"redhat-marketplace-8gtc9\" (UID: \"8fb3c078-0953-4561-a532-cc25ff32d845\") " pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.284018 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.585103 4836 generic.go:334] "Generic (PLEG): container finished" podID="eaecd71b-3b00-427a-9654-9d04af5469b9" containerID="21a01e036ae8bc2d46ea0e432083ed8e2bc2fae27eab393688cc5b04a75de90c" exitCode=0 Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.590239 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxj4j" event={"ID":"eaecd71b-3b00-427a-9654-9d04af5469b9","Type":"ContainerDied","Data":"21a01e036ae8bc2d46ea0e432083ed8e2bc2fae27eab393688cc5b04a75de90c"} Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.590314 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjxvt" event={"ID":"212802dd-4c4f-444a-b443-bc3bbd1431bc","Type":"ContainerStarted","Data":"57121581c3ae7e47488b89b50164c62707ba95ae0cdbe02ba6f16d945bcfc23e"} Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.713652 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gtc9"] Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.961211 4836 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-89b2r"] Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.962651 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.965224 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 14:12:44 crc kubenswrapper[4836]: I0217 14:12:44.977978 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89b2r"] Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.085715 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-utilities\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.085781 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97574\" (UniqueName: \"kubernetes.io/projected/cc99d806-e359-4577-8a61-1b527af8779f-kube-api-access-97574\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.085823 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-catalog-content\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.187000 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-97574\" (UniqueName: \"kubernetes.io/projected/cc99d806-e359-4577-8a61-1b527af8779f-kube-api-access-97574\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.187091 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-catalog-content\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.187133 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-utilities\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.187624 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-utilities\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.187771 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-catalog-content\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.206273 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97574\" (UniqueName: 
\"kubernetes.io/projected/cc99d806-e359-4577-8a61-1b527af8779f-kube-api-access-97574\") pod \"redhat-operators-89b2r\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.290576 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.595370 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxj4j" event={"ID":"eaecd71b-3b00-427a-9654-9d04af5469b9","Type":"ContainerStarted","Data":"c37fc7ad8678f0bd7ef3cfd0b3f911e8100a176cad637e5e7823983505fa0d4f"} Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.597083 4836 generic.go:334] "Generic (PLEG): container finished" podID="8fb3c078-0953-4561-a532-cc25ff32d845" containerID="eafccfb016bdc7ac4ddee448c0db9934c540fe9db8c38a30073338c2456bb37b" exitCode=0 Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.597173 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gtc9" event={"ID":"8fb3c078-0953-4561-a532-cc25ff32d845","Type":"ContainerDied","Data":"eafccfb016bdc7ac4ddee448c0db9934c540fe9db8c38a30073338c2456bb37b"} Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.597216 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gtc9" event={"ID":"8fb3c078-0953-4561-a532-cc25ff32d845","Type":"ContainerStarted","Data":"cf63cc751376ca9c7e7c896ffb7d4daf46e0a826afa0f6eb466c4e72cfccd5e7"} Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.598925 4836 generic.go:334] "Generic (PLEG): container finished" podID="212802dd-4c4f-444a-b443-bc3bbd1431bc" containerID="57121581c3ae7e47488b89b50164c62707ba95ae0cdbe02ba6f16d945bcfc23e" exitCode=0 Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.598961 4836 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-zjxvt" event={"ID":"212802dd-4c4f-444a-b443-bc3bbd1431bc","Type":"ContainerDied","Data":"57121581c3ae7e47488b89b50164c62707ba95ae0cdbe02ba6f16d945bcfc23e"} Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.661890 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xxj4j" podStartSLOduration=2.214965978 podStartE2EDuration="4.661862618s" podCreationTimestamp="2026-02-17 14:12:41 +0000 UTC" firstStartedPulling="2026-02-17 14:12:42.555270447 +0000 UTC m=+388.898198736" lastFinishedPulling="2026-02-17 14:12:45.002167107 +0000 UTC m=+391.345095376" observedRunningTime="2026-02-17 14:12:45.626387444 +0000 UTC m=+391.969315733" watchObservedRunningTime="2026-02-17 14:12:45.661862618 +0000 UTC m=+392.004790887" Feb 17 14:12:45 crc kubenswrapper[4836]: I0217 14:12:45.713474 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89b2r"] Feb 17 14:12:45 crc kubenswrapper[4836]: W0217 14:12:45.718777 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc99d806_e359_4577_8a61_1b527af8779f.slice/crio-7917a2258074c4b89c2b9c207136528b694ef3fdf3891f163bd96f2105f7c9c7 WatchSource:0}: Error finding container 7917a2258074c4b89c2b9c207136528b694ef3fdf3891f163bd96f2105f7c9c7: Status 404 returned error can't find the container with id 7917a2258074c4b89c2b9c207136528b694ef3fdf3891f163bd96f2105f7c9c7 Feb 17 14:12:46 crc kubenswrapper[4836]: I0217 14:12:46.608318 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjxvt" event={"ID":"212802dd-4c4f-444a-b443-bc3bbd1431bc","Type":"ContainerStarted","Data":"100f1474b576f564309897e2bb61f7a8e5947070d42242a0989d63c32f072200"} Feb 17 14:12:46 crc kubenswrapper[4836]: I0217 14:12:46.610711 4836 generic.go:334] "Generic (PLEG): container finished" 
podID="cc99d806-e359-4577-8a61-1b527af8779f" containerID="fa952a578ab7d74e43550d2abf42e1871632978ec68916c0a6508b2ed82226f0" exitCode=0 Feb 17 14:12:46 crc kubenswrapper[4836]: I0217 14:12:46.610805 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89b2r" event={"ID":"cc99d806-e359-4577-8a61-1b527af8779f","Type":"ContainerDied","Data":"fa952a578ab7d74e43550d2abf42e1871632978ec68916c0a6508b2ed82226f0"} Feb 17 14:12:46 crc kubenswrapper[4836]: I0217 14:12:46.610844 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89b2r" event={"ID":"cc99d806-e359-4577-8a61-1b527af8779f","Type":"ContainerStarted","Data":"7917a2258074c4b89c2b9c207136528b694ef3fdf3891f163bd96f2105f7c9c7"} Feb 17 14:12:46 crc kubenswrapper[4836]: I0217 14:12:46.612609 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gtc9" event={"ID":"8fb3c078-0953-4561-a532-cc25ff32d845","Type":"ContainerStarted","Data":"93383f2b574dbb3dd74a694937e4f11376857c8902e53ddc5cef1e5d40402788"} Feb 17 14:12:46 crc kubenswrapper[4836]: I0217 14:12:46.633704 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zjxvt" podStartSLOduration=2.123615716 podStartE2EDuration="4.633667797s" podCreationTimestamp="2026-02-17 14:12:42 +0000 UTC" firstStartedPulling="2026-02-17 14:12:43.565667082 +0000 UTC m=+389.908595351" lastFinishedPulling="2026-02-17 14:12:46.075719163 +0000 UTC m=+392.418647432" observedRunningTime="2026-02-17 14:12:46.627228493 +0000 UTC m=+392.970156772" watchObservedRunningTime="2026-02-17 14:12:46.633667797 +0000 UTC m=+392.976596076" Feb 17 14:12:47 crc kubenswrapper[4836]: I0217 14:12:47.619923 4836 generic.go:334] "Generic (PLEG): container finished" podID="8fb3c078-0953-4561-a532-cc25ff32d845" containerID="93383f2b574dbb3dd74a694937e4f11376857c8902e53ddc5cef1e5d40402788" exitCode=0 Feb 17 14:12:47 
crc kubenswrapper[4836]: I0217 14:12:47.620035 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gtc9" event={"ID":"8fb3c078-0953-4561-a532-cc25ff32d845","Type":"ContainerDied","Data":"93383f2b574dbb3dd74a694937e4f11376857c8902e53ddc5cef1e5d40402788"} Feb 17 14:12:48 crc kubenswrapper[4836]: I0217 14:12:48.627078 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89b2r" event={"ID":"cc99d806-e359-4577-8a61-1b527af8779f","Type":"ContainerStarted","Data":"99c757b68ed859a793668b56d22b853641589be9aa542f670159f298a8c5ffcd"} Feb 17 14:12:48 crc kubenswrapper[4836]: I0217 14:12:48.629623 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gtc9" event={"ID":"8fb3c078-0953-4561-a532-cc25ff32d845","Type":"ContainerStarted","Data":"389c3d1c8aeeb9b0460e0bedb15b9496356ba81bdfc15eae0bdea17e125e5949"} Feb 17 14:12:48 crc kubenswrapper[4836]: I0217 14:12:48.690846 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8gtc9" podStartSLOduration=3.023952167 podStartE2EDuration="5.690824268s" podCreationTimestamp="2026-02-17 14:12:43 +0000 UTC" firstStartedPulling="2026-02-17 14:12:45.59914713 +0000 UTC m=+391.942075409" lastFinishedPulling="2026-02-17 14:12:48.266019241 +0000 UTC m=+394.608947510" observedRunningTime="2026-02-17 14:12:48.687933136 +0000 UTC m=+395.030861425" watchObservedRunningTime="2026-02-17 14:12:48.690824268 +0000 UTC m=+395.033752537" Feb 17 14:12:50 crc kubenswrapper[4836]: I0217 14:12:50.643069 4836 generic.go:334] "Generic (PLEG): container finished" podID="cc99d806-e359-4577-8a61-1b527af8779f" containerID="99c757b68ed859a793668b56d22b853641589be9aa542f670159f298a8c5ffcd" exitCode=0 Feb 17 14:12:50 crc kubenswrapper[4836]: I0217 14:12:50.643169 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-89b2r" event={"ID":"cc99d806-e359-4577-8a61-1b527af8779f","Type":"ContainerDied","Data":"99c757b68ed859a793668b56d22b853641589be9aa542f670159f298a8c5ffcd"} Feb 17 14:12:51 crc kubenswrapper[4836]: I0217 14:12:51.655113 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89b2r" event={"ID":"cc99d806-e359-4577-8a61-1b527af8779f","Type":"ContainerStarted","Data":"b84dd65de54881081222d1401d684becd3ab6f396a5d3ddb1a10e413f4f858e0"} Feb 17 14:12:51 crc kubenswrapper[4836]: I0217 14:12:51.683314 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-89b2r" podStartSLOduration=2.879315975 podStartE2EDuration="7.683272681s" podCreationTimestamp="2026-02-17 14:12:44 +0000 UTC" firstStartedPulling="2026-02-17 14:12:46.612246986 +0000 UTC m=+392.955175255" lastFinishedPulling="2026-02-17 14:12:51.416203692 +0000 UTC m=+397.759131961" observedRunningTime="2026-02-17 14:12:51.682492488 +0000 UTC m=+398.025420787" watchObservedRunningTime="2026-02-17 14:12:51.683272681 +0000 UTC m=+398.026200970" Feb 17 14:12:51 crc kubenswrapper[4836]: I0217 14:12:51.928042 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:51 crc kubenswrapper[4836]: I0217 14:12:51.928458 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:51 crc kubenswrapper[4836]: I0217 14:12:51.993070 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:52 crc kubenswrapper[4836]: I0217 14:12:52.702053 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xxj4j" Feb 17 14:12:52 crc kubenswrapper[4836]: I0217 14:12:52.930335 4836 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:52 crc kubenswrapper[4836]: I0217 14:12:52.930408 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:52 crc kubenswrapper[4836]: I0217 14:12:52.988305 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:53 crc kubenswrapper[4836]: I0217 14:12:53.716776 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zjxvt" Feb 17 14:12:54 crc kubenswrapper[4836]: I0217 14:12:54.284870 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:54 crc kubenswrapper[4836]: I0217 14:12:54.284935 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:54 crc kubenswrapper[4836]: I0217 14:12:54.367563 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:54 crc kubenswrapper[4836]: I0217 14:12:54.720558 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8gtc9" Feb 17 14:12:55 crc kubenswrapper[4836]: I0217 14:12:55.291520 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:55 crc kubenswrapper[4836]: I0217 14:12:55.293201 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:12:56 crc kubenswrapper[4836]: I0217 14:12:56.328794 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-89b2r" 
podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="registry-server" probeResult="failure" output=< Feb 17 14:12:56 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:12:56 crc kubenswrapper[4836]: > Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.176453 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5qw6z"] Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.177240 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.188263 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5qw6z"] Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279284 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279614 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-registry-tls\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279637 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbed41f2-8f89-4e10-a73e-9c44df59b13b-trusted-ca\") pod \"image-registry-66df7c8f76-5qw6z\" 
(UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279667 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbed41f2-8f89-4e10-a73e-9c44df59b13b-registry-certificates\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279694 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbed41f2-8f89-4e10-a73e-9c44df59b13b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279710 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-bound-sa-token\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279881 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzw72\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-kube-api-access-zzw72\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.279911 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbed41f2-8f89-4e10-a73e-9c44df59b13b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.306258 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.380503 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbed41f2-8f89-4e10-a73e-9c44df59b13b-trusted-ca\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.381510 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-registry-tls\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.381643 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbed41f2-8f89-4e10-a73e-9c44df59b13b-registry-certificates\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.381672 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbed41f2-8f89-4e10-a73e-9c44df59b13b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.381689 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-bound-sa-token\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.382452 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbed41f2-8f89-4e10-a73e-9c44df59b13b-trusted-ca\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.382605 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzw72\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-kube-api-access-zzw72\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.382718 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbed41f2-8f89-4e10-a73e-9c44df59b13b-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.383143 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbed41f2-8f89-4e10-a73e-9c44df59b13b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.383234 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbed41f2-8f89-4e10-a73e-9c44df59b13b-registry-certificates\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.387324 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-registry-tls\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.401709 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbed41f2-8f89-4e10-a73e-9c44df59b13b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.405003 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-bound-sa-token\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.409214 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzw72\" (UniqueName: \"kubernetes.io/projected/dbed41f2-8f89-4e10-a73e-9c44df59b13b-kube-api-access-zzw72\") pod \"image-registry-66df7c8f76-5qw6z\" (UID: \"dbed41f2-8f89-4e10-a73e-9c44df59b13b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.501849 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.825979 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85f58dddc7-nc7rr"] Feb 17 14:12:57 crc kubenswrapper[4836]: I0217 14:12:57.826748 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" podUID="3135ca20-3162-4278-bbd7-de1d6f977dfe" containerName="controller-manager" containerID="cri-o://7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a" gracePeriod=30 Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.228772 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5qw6z"] Feb 17 14:12:58 crc kubenswrapper[4836]: W0217 14:12:58.243160 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbed41f2_8f89_4e10_a73e_9c44df59b13b.slice/crio-29146d415312ad83deb04838fc6884623244cd4c1654ce771355c8d0555a4b15 WatchSource:0}: Error finding container 
29146d415312ad83deb04838fc6884623244cd4c1654ce771355c8d0555a4b15: Status 404 returned error can't find the container with id 29146d415312ad83deb04838fc6884623244cd4c1654ce771355c8d0555a4b15 Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.255380 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.300078 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdjgk\" (UniqueName: \"kubernetes.io/projected/3135ca20-3162-4278-bbd7-de1d6f977dfe-kube-api-access-hdjgk\") pod \"3135ca20-3162-4278-bbd7-de1d6f977dfe\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.300331 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-client-ca\") pod \"3135ca20-3162-4278-bbd7-de1d6f977dfe\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.300470 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-proxy-ca-bundles\") pod \"3135ca20-3162-4278-bbd7-de1d6f977dfe\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.300558 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-config\") pod \"3135ca20-3162-4278-bbd7-de1d6f977dfe\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.300640 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3135ca20-3162-4278-bbd7-de1d6f977dfe-serving-cert\") pod \"3135ca20-3162-4278-bbd7-de1d6f977dfe\" (UID: \"3135ca20-3162-4278-bbd7-de1d6f977dfe\") " Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.301317 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-client-ca" (OuterVolumeSpecName: "client-ca") pod "3135ca20-3162-4278-bbd7-de1d6f977dfe" (UID: "3135ca20-3162-4278-bbd7-de1d6f977dfe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.301399 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3135ca20-3162-4278-bbd7-de1d6f977dfe" (UID: "3135ca20-3162-4278-bbd7-de1d6f977dfe"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.301777 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-config" (OuterVolumeSpecName: "config") pod "3135ca20-3162-4278-bbd7-de1d6f977dfe" (UID: "3135ca20-3162-4278-bbd7-de1d6f977dfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.312888 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3135ca20-3162-4278-bbd7-de1d6f977dfe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3135ca20-3162-4278-bbd7-de1d6f977dfe" (UID: "3135ca20-3162-4278-bbd7-de1d6f977dfe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.313597 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3135ca20-3162-4278-bbd7-de1d6f977dfe-kube-api-access-hdjgk" (OuterVolumeSpecName: "kube-api-access-hdjgk") pod "3135ca20-3162-4278-bbd7-de1d6f977dfe" (UID: "3135ca20-3162-4278-bbd7-de1d6f977dfe"). InnerVolumeSpecName "kube-api-access-hdjgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.402798 4836 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3135ca20-3162-4278-bbd7-de1d6f977dfe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.402851 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdjgk\" (UniqueName: \"kubernetes.io/projected/3135ca20-3162-4278-bbd7-de1d6f977dfe-kube-api-access-hdjgk\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.402862 4836 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.402872 4836 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.402884 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3135ca20-3162-4278-bbd7-de1d6f977dfe-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.712704 4836 generic.go:334] "Generic (PLEG): container finished" 
podID="3135ca20-3162-4278-bbd7-de1d6f977dfe" containerID="7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a" exitCode=0 Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.712848 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" event={"ID":"3135ca20-3162-4278-bbd7-de1d6f977dfe","Type":"ContainerDied","Data":"7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a"} Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.713117 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" event={"ID":"3135ca20-3162-4278-bbd7-de1d6f977dfe","Type":"ContainerDied","Data":"7e2281e26a53d8122732046bbdbf7cffb006b03fc9fd9923b35c064e85e5470c"} Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.713007 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85f58dddc7-nc7rr" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.713143 4836 scope.go:117] "RemoveContainer" containerID="7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.715752 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" event={"ID":"dbed41f2-8f89-4e10-a73e-9c44df59b13b","Type":"ContainerStarted","Data":"236e86df2c880f2555debb95a06cfc97b9c450df455c00d9ff6b005211b9594a"} Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.715805 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" event={"ID":"dbed41f2-8f89-4e10-a73e-9c44df59b13b","Type":"ContainerStarted","Data":"29146d415312ad83deb04838fc6884623244cd4c1654ce771355c8d0555a4b15"} Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.715998 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.734943 4836 scope.go:117] "RemoveContainer" containerID="7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a" Feb 17 14:12:58 crc kubenswrapper[4836]: E0217 14:12:58.735752 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a\": container with ID starting with 7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a not found: ID does not exist" containerID="7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.735814 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a"} err="failed to get container status \"7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a\": rpc error: code = NotFound desc = could not find container \"7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a\": container with ID starting with 7563192babedf15dbfd2b3db7bca7b850f62a2b88195271072c4dd96129be51a not found: ID does not exist" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.740269 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" podStartSLOduration=1.740221024 podStartE2EDuration="1.740221024s" podCreationTimestamp="2026-02-17 14:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:12:58.737672241 +0000 UTC m=+405.080600530" watchObservedRunningTime="2026-02-17 14:12:58.740221024 +0000 UTC m=+405.083149303" Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.752868 4836 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-85f58dddc7-nc7rr"] Feb 17 14:12:58 crc kubenswrapper[4836]: I0217 14:12:58.766627 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85f58dddc7-nc7rr"] Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.175754 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69d6c474bd-jspw5"] Feb 17 14:12:59 crc kubenswrapper[4836]: E0217 14:12:59.176388 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3135ca20-3162-4278-bbd7-de1d6f977dfe" containerName="controller-manager" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.176478 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="3135ca20-3162-4278-bbd7-de1d6f977dfe" containerName="controller-manager" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.176870 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="3135ca20-3162-4278-bbd7-de1d6f977dfe" containerName="controller-manager" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.177708 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.180900 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.181217 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.184254 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.189266 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.189777 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.190071 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.190254 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.191522 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69d6c474bd-jspw5"] Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.313412 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-config\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " 
pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.313484 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8tm\" (UniqueName: \"kubernetes.io/projected/f772d120-59e6-4232-ada8-751b59262fc5-kube-api-access-2d8tm\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.313515 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f772d120-59e6-4232-ada8-751b59262fc5-serving-cert\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.313540 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-client-ca\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.313567 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-proxy-ca-bundles\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.414408 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-config\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.414503 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8tm\" (UniqueName: \"kubernetes.io/projected/f772d120-59e6-4232-ada8-751b59262fc5-kube-api-access-2d8tm\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.414538 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f772d120-59e6-4232-ada8-751b59262fc5-serving-cert\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.414567 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-client-ca\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.414593 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-proxy-ca-bundles\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 
14:12:59.416047 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-client-ca\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.416959 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-config\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.417024 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f772d120-59e6-4232-ada8-751b59262fc5-proxy-ca-bundles\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.426509 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f772d120-59e6-4232-ada8-751b59262fc5-serving-cert\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.436767 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8tm\" (UniqueName: \"kubernetes.io/projected/f772d120-59e6-4232-ada8-751b59262fc5-kube-api-access-2d8tm\") pod \"controller-manager-69d6c474bd-jspw5\" (UID: \"f772d120-59e6-4232-ada8-751b59262fc5\") " 
pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.497034 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.765207 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.765678 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:12:59 crc kubenswrapper[4836]: I0217 14:12:59.962334 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69d6c474bd-jspw5"] Feb 17 14:13:00 crc kubenswrapper[4836]: I0217 14:13:00.577145 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3135ca20-3162-4278-bbd7-de1d6f977dfe" path="/var/lib/kubelet/pods/3135ca20-3162-4278-bbd7-de1d6f977dfe/volumes" Feb 17 14:13:00 crc kubenswrapper[4836]: I0217 14:13:00.736891 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" event={"ID":"f772d120-59e6-4232-ada8-751b59262fc5","Type":"ContainerStarted","Data":"ff24011989436c878dcb7ddeca2877a0ab55a69759535b681dce4057da5174cc"} Feb 17 14:13:00 crc kubenswrapper[4836]: I0217 14:13:00.736963 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" 
event={"ID":"f772d120-59e6-4232-ada8-751b59262fc5","Type":"ContainerStarted","Data":"8227ce45a3cb16dc92ac913a96d2d14ccdb873a020d0c2dfa26dd3a64bf600d1"} Feb 17 14:13:00 crc kubenswrapper[4836]: I0217 14:13:00.737144 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:13:00 crc kubenswrapper[4836]: I0217 14:13:00.742862 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" Feb 17 14:13:00 crc kubenswrapper[4836]: I0217 14:13:00.760644 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69d6c474bd-jspw5" podStartSLOduration=3.760621297 podStartE2EDuration="3.760621297s" podCreationTimestamp="2026-02-17 14:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:13:00.757345873 +0000 UTC m=+407.100274162" watchObservedRunningTime="2026-02-17 14:13:00.760621297 +0000 UTC m=+407.103549576" Feb 17 14:13:05 crc kubenswrapper[4836]: I0217 14:13:05.335922 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:13:05 crc kubenswrapper[4836]: I0217 14:13:05.390944 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:13:17 crc kubenswrapper[4836]: I0217 14:13:17.508523 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5qw6z" Feb 17 14:13:17 crc kubenswrapper[4836]: I0217 14:13:17.584401 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5vhz9"] Feb 17 14:13:29 crc kubenswrapper[4836]: I0217 14:13:29.765577 4836 
patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:13:29 crc kubenswrapper[4836]: I0217 14:13:29.766378 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:13:29 crc kubenswrapper[4836]: I0217 14:13:29.766446 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:13:29 crc kubenswrapper[4836]: I0217 14:13:29.767375 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ca471c2a83c51e21c02e6df84d64c6720d133c689bc0501ece1848cccb37b3b"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:13:29 crc kubenswrapper[4836]: I0217 14:13:29.767446 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://6ca471c2a83c51e21c02e6df84d64c6720d133c689bc0501ece1848cccb37b3b" gracePeriod=600 Feb 17 14:13:30 crc kubenswrapper[4836]: I0217 14:13:30.059278 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="6ca471c2a83c51e21c02e6df84d64c6720d133c689bc0501ece1848cccb37b3b" exitCode=0 Feb 17 14:13:30 crc 
kubenswrapper[4836]: I0217 14:13:30.059346 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"6ca471c2a83c51e21c02e6df84d64c6720d133c689bc0501ece1848cccb37b3b"} Feb 17 14:13:30 crc kubenswrapper[4836]: I0217 14:13:30.059854 4836 scope.go:117] "RemoveContainer" containerID="c10f7b69881ea75bfa81905a42379fed8daf48e788b5f2787c522a52d28f58cb" Feb 17 14:13:31 crc kubenswrapper[4836]: I0217 14:13:31.067820 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"1b2a0d64ec4a5faa95e6312a8de2b21c8f3e85f4d851c39760904a4b16753249"} Feb 17 14:13:42 crc kubenswrapper[4836]: I0217 14:13:42.636616 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" podUID="4cd3f585-c95f-43ee-962c-ea33aff90415" containerName="registry" containerID="cri-o://bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6" gracePeriod=30 Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.091343 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.138326 4836 generic.go:334] "Generic (PLEG): container finished" podID="4cd3f585-c95f-43ee-962c-ea33aff90415" containerID="bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6" exitCode=0 Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.138376 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" event={"ID":"4cd3f585-c95f-43ee-962c-ea33aff90415","Type":"ContainerDied","Data":"bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6"} Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.138393 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.138406 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5vhz9" event={"ID":"4cd3f585-c95f-43ee-962c-ea33aff90415","Type":"ContainerDied","Data":"b92bf709add22f9c57e92a26debc7c9604b5ddd76791fbcef0b8821c381eba8e"} Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.138429 4836 scope.go:117] "RemoveContainer" containerID="bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.161656 4836 scope.go:117] "RemoveContainer" containerID="bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6" Feb 17 14:13:43 crc kubenswrapper[4836]: E0217 14:13:43.162546 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6\": container with ID starting with bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6 not found: ID does not exist" 
containerID="bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.162602 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6"} err="failed to get container status \"bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6\": rpc error: code = NotFound desc = could not find container \"bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6\": container with ID starting with bdf9c0182351aaefe8f64ad91ea1c51a4b7acfffd267691001139a7e914dc3b6 not found: ID does not exist" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.243834 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cd3f585-c95f-43ee-962c-ea33aff90415-ca-trust-extracted\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.243952 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhp9d\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-kube-api-access-vhp9d\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.243980 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-tls\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.244189 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.244235 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-trusted-ca\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.244269 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-certificates\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.244289 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cd3f585-c95f-43ee-962c-ea33aff90415-installation-pull-secrets\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.244425 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-bound-sa-token\") pod \"4cd3f585-c95f-43ee-962c-ea33aff90415\" (UID: \"4cd3f585-c95f-43ee-962c-ea33aff90415\") " Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.245589 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.246188 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.247027 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.247376 4836 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.252806 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-kube-api-access-vhp9d" (OuterVolumeSpecName: "kube-api-access-vhp9d") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). InnerVolumeSpecName "kube-api-access-vhp9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.252916 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd3f585-c95f-43ee-962c-ea33aff90415-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.254174 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.254669 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.258383 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.264822 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd3f585-c95f-43ee-962c-ea33aff90415-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4cd3f585-c95f-43ee-962c-ea33aff90415" (UID: "4cd3f585-c95f-43ee-962c-ea33aff90415"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.348611 4836 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.348671 4836 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cd3f585-c95f-43ee-962c-ea33aff90415-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.348686 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhp9d\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-kube-api-access-vhp9d\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.348703 4836 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4cd3f585-c95f-43ee-962c-ea33aff90415-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.348716 4836 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cd3f585-c95f-43ee-962c-ea33aff90415-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.474722 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5vhz9"] Feb 17 14:13:43 crc kubenswrapper[4836]: I0217 14:13:43.485712 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5vhz9"] Feb 17 14:13:44 crc kubenswrapper[4836]: I0217 14:13:44.577764 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd3f585-c95f-43ee-962c-ea33aff90415" 
path="/var/lib/kubelet/pods/4cd3f585-c95f-43ee-962c-ea33aff90415/volumes" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.185491 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf"] Feb 17 14:15:00 crc kubenswrapper[4836]: E0217 14:15:00.186398 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd3f585-c95f-43ee-962c-ea33aff90415" containerName="registry" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.186412 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd3f585-c95f-43ee-962c-ea33aff90415" containerName="registry" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.186530 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd3f585-c95f-43ee-962c-ea33aff90415" containerName="registry" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.186957 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.189601 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.189755 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.206763 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf"] Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.293194 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a8fa91-916f-4b01-bf45-63e0add01572-secret-volume\") pod \"collect-profiles-29522295-87mmf\" (UID: 
\"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.293425 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r44dg\" (UniqueName: \"kubernetes.io/projected/18a8fa91-916f-4b01-bf45-63e0add01572-kube-api-access-r44dg\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.293463 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a8fa91-916f-4b01-bf45-63e0add01572-config-volume\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.394849 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r44dg\" (UniqueName: \"kubernetes.io/projected/18a8fa91-916f-4b01-bf45-63e0add01572-kube-api-access-r44dg\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.394899 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a8fa91-916f-4b01-bf45-63e0add01572-config-volume\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.394954 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a8fa91-916f-4b01-bf45-63e0add01572-secret-volume\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.395828 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a8fa91-916f-4b01-bf45-63e0add01572-config-volume\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.400883 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a8fa91-916f-4b01-bf45-63e0add01572-secret-volume\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.411055 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r44dg\" (UniqueName: \"kubernetes.io/projected/18a8fa91-916f-4b01-bf45-63e0add01572-kube-api-access-r44dg\") pod \"collect-profiles-29522295-87mmf\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.507903 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:00 crc kubenswrapper[4836]: I0217 14:15:00.931086 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf"] Feb 17 14:15:01 crc kubenswrapper[4836]: I0217 14:15:01.730601 4836 generic.go:334] "Generic (PLEG): container finished" podID="18a8fa91-916f-4b01-bf45-63e0add01572" containerID="43a52fe036affd4e3617ffdb7972a0968f159b10d3199b68e41003595bd9384d" exitCode=0 Feb 17 14:15:01 crc kubenswrapper[4836]: I0217 14:15:01.730788 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" event={"ID":"18a8fa91-916f-4b01-bf45-63e0add01572","Type":"ContainerDied","Data":"43a52fe036affd4e3617ffdb7972a0968f159b10d3199b68e41003595bd9384d"} Feb 17 14:15:01 crc kubenswrapper[4836]: I0217 14:15:01.731405 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" event={"ID":"18a8fa91-916f-4b01-bf45-63e0add01572","Type":"ContainerStarted","Data":"062c0bac4496dd7efecca69afab7a8c68666e9d2123b26903c1db4b26c6d0114"} Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.001091 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.165431 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a8fa91-916f-4b01-bf45-63e0add01572-secret-volume\") pod \"18a8fa91-916f-4b01-bf45-63e0add01572\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.165597 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r44dg\" (UniqueName: \"kubernetes.io/projected/18a8fa91-916f-4b01-bf45-63e0add01572-kube-api-access-r44dg\") pod \"18a8fa91-916f-4b01-bf45-63e0add01572\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.165626 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a8fa91-916f-4b01-bf45-63e0add01572-config-volume\") pod \"18a8fa91-916f-4b01-bf45-63e0add01572\" (UID: \"18a8fa91-916f-4b01-bf45-63e0add01572\") " Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.166542 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a8fa91-916f-4b01-bf45-63e0add01572-config-volume" (OuterVolumeSpecName: "config-volume") pod "18a8fa91-916f-4b01-bf45-63e0add01572" (UID: "18a8fa91-916f-4b01-bf45-63e0add01572"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.172246 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a8fa91-916f-4b01-bf45-63e0add01572-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18a8fa91-916f-4b01-bf45-63e0add01572" (UID: "18a8fa91-916f-4b01-bf45-63e0add01572"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.172353 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a8fa91-916f-4b01-bf45-63e0add01572-kube-api-access-r44dg" (OuterVolumeSpecName: "kube-api-access-r44dg") pod "18a8fa91-916f-4b01-bf45-63e0add01572" (UID: "18a8fa91-916f-4b01-bf45-63e0add01572"). InnerVolumeSpecName "kube-api-access-r44dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.267093 4836 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a8fa91-916f-4b01-bf45-63e0add01572-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.267153 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r44dg\" (UniqueName: \"kubernetes.io/projected/18a8fa91-916f-4b01-bf45-63e0add01572-kube-api-access-r44dg\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.267168 4836 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a8fa91-916f-4b01-bf45-63e0add01572-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.746378 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" event={"ID":"18a8fa91-916f-4b01-bf45-63e0add01572","Type":"ContainerDied","Data":"062c0bac4496dd7efecca69afab7a8c68666e9d2123b26903c1db4b26c6d0114"} Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.746816 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062c0bac4496dd7efecca69afab7a8c68666e9d2123b26903c1db4b26c6d0114" Feb 17 14:15:03 crc kubenswrapper[4836]: I0217 14:15:03.746683 4836 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-87mmf" Feb 17 14:15:14 crc kubenswrapper[4836]: I0217 14:15:14.870506 4836 scope.go:117] "RemoveContainer" containerID="f5f1510b84a48fd765ca27386941284d20f6da0225cb6c655223588a86aa6f8f" Feb 17 14:15:59 crc kubenswrapper[4836]: I0217 14:15:59.765216 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:15:59 crc kubenswrapper[4836]: I0217 14:15:59.765848 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:16:29 crc kubenswrapper[4836]: I0217 14:16:29.765070 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:16:29 crc kubenswrapper[4836]: I0217 14:16:29.765757 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:16:59 crc kubenswrapper[4836]: I0217 14:16:59.765154 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:16:59 crc kubenswrapper[4836]: I0217 14:16:59.765991 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:16:59 crc kubenswrapper[4836]: I0217 14:16:59.766062 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:16:59 crc kubenswrapper[4836]: I0217 14:16:59.766963 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b2a0d64ec4a5faa95e6312a8de2b21c8f3e85f4d851c39760904a4b16753249"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:16:59 crc kubenswrapper[4836]: I0217 14:16:59.767112 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://1b2a0d64ec4a5faa95e6312a8de2b21c8f3e85f4d851c39760904a4b16753249" gracePeriod=600 Feb 17 14:17:00 crc kubenswrapper[4836]: I0217 14:17:00.605979 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="1b2a0d64ec4a5faa95e6312a8de2b21c8f3e85f4d851c39760904a4b16753249" exitCode=0 Feb 17 14:17:00 crc kubenswrapper[4836]: I0217 14:17:00.606988 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"1b2a0d64ec4a5faa95e6312a8de2b21c8f3e85f4d851c39760904a4b16753249"} Feb 17 14:17:00 crc kubenswrapper[4836]: I0217 14:17:00.607033 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"d7f43ee4be167fb696d056804834f76d74b6a96b2dd00fc7f1328e7b9c2e7869"} Feb 17 14:17:00 crc kubenswrapper[4836]: I0217 14:17:00.607059 4836 scope.go:117] "RemoveContainer" containerID="6ca471c2a83c51e21c02e6df84d64c6720d133c689bc0501ece1848cccb37b3b" Feb 17 14:17:14 crc kubenswrapper[4836]: I0217 14:17:14.922277 4836 scope.go:117] "RemoveContainer" containerID="56a4ac051fd52f2fd8e193686dffb745df251c7f892fec72d600a2fa80ecbd34" Feb 17 14:17:14 crc kubenswrapper[4836]: I0217 14:17:14.952679 4836 scope.go:117] "RemoveContainer" containerID="12b9c51f4d9306ca0c2b4adb55d1695962298f8f615d1a514d7884045bb5aea1" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.166386 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd"] Feb 17 14:17:30 crc kubenswrapper[4836]: E0217 14:17:30.167154 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a8fa91-916f-4b01-bf45-63e0add01572" containerName="collect-profiles" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.167170 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a8fa91-916f-4b01-bf45-63e0add01572" containerName="collect-profiles" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.167274 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a8fa91-916f-4b01-bf45-63e0add01572" containerName="collect-profiles" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.168367 4836 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.170449 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.186435 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd"] Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.203326 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.203426 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.203460 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddgg8\" (UniqueName: \"kubernetes.io/projected/f611c52f-90dc-454e-8c3c-ca9d6a915f58-kube-api-access-ddgg8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc 
kubenswrapper[4836]: I0217 14:17:30.304325 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.304698 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.304808 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddgg8\" (UniqueName: \"kubernetes.io/projected/f611c52f-90dc-454e-8c3c-ca9d6a915f58-kube-api-access-ddgg8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.305334 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.305776 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.327056 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddgg8\" (UniqueName: \"kubernetes.io/projected/f611c52f-90dc-454e-8c3c-ca9d6a915f58-kube-api-access-ddgg8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.485720 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.688326 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd"] Feb 17 14:17:30 crc kubenswrapper[4836]: I0217 14:17:30.792821 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" event={"ID":"f611c52f-90dc-454e-8c3c-ca9d6a915f58","Type":"ContainerStarted","Data":"169f45bab279dc23066b81c45e03bb80038e08b42dbaf3f661014cf87fbe3efe"} Feb 17 14:17:31 crc kubenswrapper[4836]: I0217 14:17:31.800243 4836 generic.go:334] "Generic (PLEG): container finished" podID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerID="ce843b6fe3e045a48ce3a1314ded9e63587f96156cbcca0054c5f79af5057933" exitCode=0 Feb 17 14:17:31 crc kubenswrapper[4836]: I0217 14:17:31.800358 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" event={"ID":"f611c52f-90dc-454e-8c3c-ca9d6a915f58","Type":"ContainerDied","Data":"ce843b6fe3e045a48ce3a1314ded9e63587f96156cbcca0054c5f79af5057933"} Feb 17 14:17:31 crc kubenswrapper[4836]: I0217 14:17:31.802414 4836 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:17:33 crc kubenswrapper[4836]: I0217 14:17:33.817920 4836 generic.go:334] "Generic (PLEG): container finished" podID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerID="874ff80b58db2b7d602ff4671cc1d0299855d3670e45dd9330d8c2a7336c6ed8" exitCode=0 Feb 17 14:17:33 crc kubenswrapper[4836]: I0217 14:17:33.817973 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" event={"ID":"f611c52f-90dc-454e-8c3c-ca9d6a915f58","Type":"ContainerDied","Data":"874ff80b58db2b7d602ff4671cc1d0299855d3670e45dd9330d8c2a7336c6ed8"} Feb 17 14:17:34 crc kubenswrapper[4836]: I0217 14:17:34.826859 4836 generic.go:334] "Generic (PLEG): container finished" podID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerID="3564e75ebe2d2e80922083e1796e4178d0f8b5b3b276be4aecae53c87752f7fb" exitCode=0 Feb 17 14:17:34 crc kubenswrapper[4836]: I0217 14:17:34.826922 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" event={"ID":"f611c52f-90dc-454e-8c3c-ca9d6a915f58","Type":"ContainerDied","Data":"3564e75ebe2d2e80922083e1796e4178d0f8b5b3b276be4aecae53c87752f7fb"} Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.121074 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.188462 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddgg8\" (UniqueName: \"kubernetes.io/projected/f611c52f-90dc-454e-8c3c-ca9d6a915f58-kube-api-access-ddgg8\") pod \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.188510 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-bundle\") pod \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.188529 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-util\") pod \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\" (UID: \"f611c52f-90dc-454e-8c3c-ca9d6a915f58\") " Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.190587 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-bundle" (OuterVolumeSpecName: "bundle") pod "f611c52f-90dc-454e-8c3c-ca9d6a915f58" (UID: "f611c52f-90dc-454e-8c3c-ca9d6a915f58"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.193637 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f611c52f-90dc-454e-8c3c-ca9d6a915f58-kube-api-access-ddgg8" (OuterVolumeSpecName: "kube-api-access-ddgg8") pod "f611c52f-90dc-454e-8c3c-ca9d6a915f58" (UID: "f611c52f-90dc-454e-8c3c-ca9d6a915f58"). InnerVolumeSpecName "kube-api-access-ddgg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.208550 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-util" (OuterVolumeSpecName: "util") pod "f611c52f-90dc-454e-8c3c-ca9d6a915f58" (UID: "f611c52f-90dc-454e-8c3c-ca9d6a915f58"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.289790 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddgg8\" (UniqueName: \"kubernetes.io/projected/f611c52f-90dc-454e-8c3c-ca9d6a915f58-kube-api-access-ddgg8\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.289837 4836 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.289849 4836 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f611c52f-90dc-454e-8c3c-ca9d6a915f58-util\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.838947 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" event={"ID":"f611c52f-90dc-454e-8c3c-ca9d6a915f58","Type":"ContainerDied","Data":"169f45bab279dc23066b81c45e03bb80038e08b42dbaf3f661014cf87fbe3efe"} Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.838996 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169f45bab279dc23066b81c45e03bb80038e08b42dbaf3f661014cf87fbe3efe" Feb 17 14:17:36 crc kubenswrapper[4836]: I0217 14:17:36.839031 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.453476 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfznp"] Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.454652 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-controller" containerID="cri-o://c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.455318 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="sbdb" containerID="cri-o://0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.455394 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="nbdb" containerID="cri-o://f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.455464 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="northd" containerID="cri-o://47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.455528 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.455584 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kube-rbac-proxy-node" containerID="cri-o://ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.455631 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-acl-logging" containerID="cri-o://1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.494692 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" containerID="cri-o://61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" gracePeriod=30 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.791452 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/3.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.793067 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovn-acl-logging/0.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.793682 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovn-controller/0.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 
14:17:40.794415 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.851817 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-log-socket\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.851865 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-netns\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.851907 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-ovn\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.851926 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-systemd\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.851949 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-node-log\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.851977 4836 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-netd\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852001 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-slash\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852019 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852053 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-config\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852092 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-bin\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852109 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-openvswitch\") pod 
\"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852134 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdwb\" (UniqueName: \"kubernetes.io/projected/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-kube-api-access-7zdwb\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852155 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-script-lib\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852176 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-systemd-units\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852198 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-etc-openvswitch\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852214 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-var-lib-openvswitch\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852236 4836 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovn-node-metrics-cert\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852274 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-ovn-kubernetes\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852321 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-kubelet\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852344 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-env-overrides\") pod \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\" (UID: \"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e\") " Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.852967 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853014 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-log-socket" (OuterVolumeSpecName: "log-socket") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853034 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853051 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853499 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-node-log" (OuterVolumeSpecName: "node-log") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853570 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853593 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-slash" (OuterVolumeSpecName: "host-slash") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853613 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853767 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853829 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853876 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.853983 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.854021 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.854041 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.857596 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.857675 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.857811 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.859732 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-kube-api-access-7zdwb" (OuterVolumeSpecName: "kube-api-access-7zdwb") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "kube-api-access-7zdwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.860938 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.865603 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nb8gc"] Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.865868 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerName="util" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.865882 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerName="util" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.865950 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.865962 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.865975 4836 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerName="extract" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.865984 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerName="extract" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.865995 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerName="pull" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866002 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerName="pull" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866013 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866020 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866028 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866035 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866044 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="nbdb" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866051 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="nbdb" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866064 4836 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-acl-logging" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866071 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-acl-logging" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866081 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kube-rbac-proxy-node" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866089 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kube-rbac-proxy-node" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866101 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866108 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866121 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="northd" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866128 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="northd" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866137 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866145 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866153 4836 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="sbdb" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866160 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="sbdb" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866195 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kubecfg-setup" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866202 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kubecfg-setup" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.866215 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866260 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866735 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kube-rbac-proxy-node" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.866757 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="sbdb" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867123 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867141 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867148 4836 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867155 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867162 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-acl-logging" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867171 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="northd" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867179 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f611c52f-90dc-454e-8c3c-ca9d6a915f58" containerName="extract" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867187 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="nbdb" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867195 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovn-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867204 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.867528 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867543 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.867830 4836 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerName="ovnkube-controller" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.868060 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovnkube-controller/3.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.870461 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovn-acl-logging/0.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871131 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfznp_67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/ovn-controller/0.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871789 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" exitCode=0 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871820 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" exitCode=0 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871828 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" exitCode=0 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871835 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" exitCode=0 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871843 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" 
containerID="bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" exitCode=0 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871849 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" exitCode=0 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871855 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" exitCode=143 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.871863 4836 generic.go:334] "Generic (PLEG): container finished" podID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" containerID="c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" exitCode=143 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.872032 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.872223 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" (UID: "67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.874931 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.875730 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876504 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876559 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876574 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876585 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876596 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876606 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876628 4836 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876635 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876640 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876645 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876650 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876655 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876660 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876666 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876673 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876681 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876687 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876692 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876696 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876702 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876707 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876712 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} Feb 17 
14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876717 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876722 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876727 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876734 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876744 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876751 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876756 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876761 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876766 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876771 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876776 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876781 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876786 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876792 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876855 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfznp" event={"ID":"67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e","Type":"ContainerDied","Data":"3bdc7f19fb50c4c29fda01e2e231206d0048a98eab720a4ee93274d360c514d1"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876865 4836 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876871 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876876 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876881 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876886 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876891 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876896 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876902 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876907 4836 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876912 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876929 4836 scope.go:117] "RemoveContainer" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.876977 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/2.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.877585 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/1.log" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.877619 4836 generic.go:334] "Generic (PLEG): container finished" podID="592aa549-1b1b-441e-93e4-0821e05ff2b2" containerID="d7051348fa11415bbd3ca42ccce04342cfc29fef1e5015e7fedf40514e49824c" exitCode=2 Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.877650 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerDied","Data":"d7051348fa11415bbd3ca42ccce04342cfc29fef1e5015e7fedf40514e49824c"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.877675 4836 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41"} Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.878134 4836 scope.go:117] "RemoveContainer" 
containerID="d7051348fa11415bbd3ca42ccce04342cfc29fef1e5015e7fedf40514e49824c" Feb 17 14:17:40 crc kubenswrapper[4836]: E0217 14:17:40.878368 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c76cc_openshift-multus(592aa549-1b1b-441e-93e4-0821e05ff2b2)\"" pod="openshift-multus/multus-c76cc" podUID="592aa549-1b1b-441e-93e4-0821e05ff2b2" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.898646 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954206 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-run-netns\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954269 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-etc-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954322 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-ovnkube-config\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954358 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-slash\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954380 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-node-log\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954408 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd545c20-7aca-4536-84b1-826c46c009f0-ovn-node-metrics-cert\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954511 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954604 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-cni-netd\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954689 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-cni-bin\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954745 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-kubelet\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954906 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n99sx\" (UniqueName: \"kubernetes.io/projected/dd545c20-7aca-4536-84b1-826c46c009f0-kube-api-access-n99sx\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954956 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-systemd\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.954989 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-log-socket\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 
14:17:40.955010 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-systemd-units\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955043 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955087 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-var-lib-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955119 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-ovn\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955166 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-env-overrides\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955186 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-run-ovn-kubernetes\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955510 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-ovnkube-script-lib\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955709 4836 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955729 4836 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955740 4836 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955751 4836 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: 
I0217 14:17:40.955760 4836 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955770 4836 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955779 4836 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955786 4836 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955795 4836 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955804 4836 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955815 4836 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955828 4836 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955837 4836 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955845 4836 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955854 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdwb\" (UniqueName: \"kubernetes.io/projected/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-kube-api-access-7zdwb\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955863 4836 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955871 4836 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955882 4836 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955890 4836 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-etc-openvswitch\") on node 
\"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.955899 4836 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.963902 4836 scope.go:117] "RemoveContainer" containerID="0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" Feb 17 14:17:40 crc kubenswrapper[4836]: I0217 14:17:40.987634 4836 scope.go:117] "RemoveContainer" containerID="f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.002037 4836 scope.go:117] "RemoveContainer" containerID="47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.014267 4836 scope.go:117] "RemoveContainer" containerID="bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.025731 4836 scope.go:117] "RemoveContainer" containerID="ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.036483 4836 scope.go:117] "RemoveContainer" containerID="1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.053513 4836 scope.go:117] "RemoveContainer" containerID="c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057090 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-cni-netd\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 
14:17:41.057131 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-kubelet\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057154 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-cni-bin\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057192 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n99sx\" (UniqueName: \"kubernetes.io/projected/dd545c20-7aca-4536-84b1-826c46c009f0-kube-api-access-n99sx\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057219 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-systemd\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057241 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-log-socket\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057254 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-cni-bin\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057263 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-systemd-units\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057313 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-systemd-units\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057317 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057314 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-kubelet\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057342 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057356 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-var-lib-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057389 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-ovn\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057389 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-systemd\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057422 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-env-overrides\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057435 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-var-lib-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057429 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-log-socket\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057472 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-run-ovn-kubernetes\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057463 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-ovn\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057447 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-run-ovn-kubernetes\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057588 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-ovnkube-script-lib\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057653 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-run-netns\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057696 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-etc-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057728 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-ovnkube-config\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057781 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-slash\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057818 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-node-log\") pod 
\"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057861 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd545c20-7aca-4536-84b1-826c46c009f0-ovn-node-metrics-cert\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057904 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.057995 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-env-overrides\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058050 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-run-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058109 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-slash\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058237 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-node-log\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058358 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-ovnkube-script-lib\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058372 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-run-netns\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058372 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-etc-openvswitch\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058430 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd545c20-7aca-4536-84b1-826c46c009f0-host-cni-netd\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.058615 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd545c20-7aca-4536-84b1-826c46c009f0-ovnkube-config\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.061762 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd545c20-7aca-4536-84b1-826c46c009f0-ovn-node-metrics-cert\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.070117 4836 scope.go:117] "RemoveContainer" containerID="81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.077351 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n99sx\" (UniqueName: \"kubernetes.io/projected/dd545c20-7aca-4536-84b1-826c46c009f0-kube-api-access-n99sx\") pod \"ovnkube-node-nb8gc\" (UID: \"dd545c20-7aca-4536-84b1-826c46c009f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.083013 4836 scope.go:117] "RemoveContainer" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.083621 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": container with ID starting with 61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775 not found: ID does not exist" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.083692 4836 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} err="failed to get container status \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": rpc error: code = NotFound desc = could not find container \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": container with ID starting with 61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.083723 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.084068 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": container with ID starting with 38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83 not found: ID does not exist" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.084134 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} err="failed to get container status \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": rpc error: code = NotFound desc = could not find container \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": container with ID starting with 38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.084164 4836 scope.go:117] "RemoveContainer" containerID="0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 
14:17:41.084519 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": container with ID starting with 0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e not found: ID does not exist" containerID="0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.084552 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} err="failed to get container status \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": rpc error: code = NotFound desc = could not find container \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": container with ID starting with 0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.084572 4836 scope.go:117] "RemoveContainer" containerID="f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.084797 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": container with ID starting with f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad not found: ID does not exist" containerID="f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.084900 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} err="failed to get container status \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": rpc 
error: code = NotFound desc = could not find container \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": container with ID starting with f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.084938 4836 scope.go:117] "RemoveContainer" containerID="47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.085189 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": container with ID starting with 47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a not found: ID does not exist" containerID="47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.085236 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} err="failed to get container status \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": rpc error: code = NotFound desc = could not find container \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": container with ID starting with 47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.085254 4836 scope.go:117] "RemoveContainer" containerID="bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.094940 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": container with ID starting with 
bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee not found: ID does not exist" containerID="bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.095103 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} err="failed to get container status \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": rpc error: code = NotFound desc = could not find container \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": container with ID starting with bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.095133 4836 scope.go:117] "RemoveContainer" containerID="ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.095709 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": container with ID starting with ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b not found: ID does not exist" containerID="ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.095741 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} err="failed to get container status \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": rpc error: code = NotFound desc = could not find container \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": container with ID starting with ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b not found: ID does not 
exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.095762 4836 scope.go:117] "RemoveContainer" containerID="1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.096410 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": container with ID starting with 1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc not found: ID does not exist" containerID="1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.096456 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} err="failed to get container status \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": rpc error: code = NotFound desc = could not find container \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": container with ID starting with 1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.096475 4836 scope.go:117] "RemoveContainer" containerID="c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.097551 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": container with ID starting with c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2 not found: ID does not exist" containerID="c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.097602 4836 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} err="failed to get container status \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": rpc error: code = NotFound desc = could not find container \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": container with ID starting with c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.097638 4836 scope.go:117] "RemoveContainer" containerID="81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9" Feb 17 14:17:41 crc kubenswrapper[4836]: E0217 14:17:41.098049 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": container with ID starting with 81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9 not found: ID does not exist" containerID="81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.098068 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} err="failed to get container status \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": rpc error: code = NotFound desc = could not find container \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": container with ID starting with 81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.098081 4836 scope.go:117] "RemoveContainer" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.098786 4836 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} err="failed to get container status \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": rpc error: code = NotFound desc = could not find container \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": container with ID starting with 61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.098820 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.099352 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} err="failed to get container status \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": rpc error: code = NotFound desc = could not find container \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": container with ID starting with 38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.099376 4836 scope.go:117] "RemoveContainer" containerID="0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.099836 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} err="failed to get container status \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": rpc error: code = NotFound desc = could not find container \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": container with ID starting with 
0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.099859 4836 scope.go:117] "RemoveContainer" containerID="f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.100246 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} err="failed to get container status \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": rpc error: code = NotFound desc = could not find container \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": container with ID starting with f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.100268 4836 scope.go:117] "RemoveContainer" containerID="47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.100673 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} err="failed to get container status \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": rpc error: code = NotFound desc = could not find container \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": container with ID starting with 47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.100691 4836 scope.go:117] "RemoveContainer" containerID="bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.101075 4836 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} err="failed to get container status \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": rpc error: code = NotFound desc = could not find container \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": container with ID starting with bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.101092 4836 scope.go:117] "RemoveContainer" containerID="ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.101564 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} err="failed to get container status \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": rpc error: code = NotFound desc = could not find container \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": container with ID starting with ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.101659 4836 scope.go:117] "RemoveContainer" containerID="1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.102088 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} err="failed to get container status \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": rpc error: code = NotFound desc = could not find container \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": container with ID starting with 1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc not found: ID does not 
exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.102106 4836 scope.go:117] "RemoveContainer" containerID="c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.102448 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} err="failed to get container status \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": rpc error: code = NotFound desc = could not find container \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": container with ID starting with c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.102478 4836 scope.go:117] "RemoveContainer" containerID="81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.102798 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} err="failed to get container status \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": rpc error: code = NotFound desc = could not find container \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": container with ID starting with 81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.102825 4836 scope.go:117] "RemoveContainer" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.103097 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} err="failed to get container status 
\"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": rpc error: code = NotFound desc = could not find container \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": container with ID starting with 61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.103119 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.103440 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} err="failed to get container status \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": rpc error: code = NotFound desc = could not find container \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": container with ID starting with 38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.103477 4836 scope.go:117] "RemoveContainer" containerID="0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.103783 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} err="failed to get container status \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": rpc error: code = NotFound desc = could not find container \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": container with ID starting with 0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.103807 4836 scope.go:117] "RemoveContainer" 
containerID="f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.104168 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} err="failed to get container status \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": rpc error: code = NotFound desc = could not find container \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": container with ID starting with f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.104189 4836 scope.go:117] "RemoveContainer" containerID="47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.104495 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} err="failed to get container status \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": rpc error: code = NotFound desc = could not find container \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": container with ID starting with 47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.104518 4836 scope.go:117] "RemoveContainer" containerID="bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.104807 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} err="failed to get container status \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": rpc error: code = NotFound desc = could 
not find container \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": container with ID starting with bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.104828 4836 scope.go:117] "RemoveContainer" containerID="ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.105085 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} err="failed to get container status \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": rpc error: code = NotFound desc = could not find container \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": container with ID starting with ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.105106 4836 scope.go:117] "RemoveContainer" containerID="1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.105359 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} err="failed to get container status \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": rpc error: code = NotFound desc = could not find container \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": container with ID starting with 1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.105396 4836 scope.go:117] "RemoveContainer" containerID="c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 
14:17:41.105733 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} err="failed to get container status \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": rpc error: code = NotFound desc = could not find container \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": container with ID starting with c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.105756 4836 scope.go:117] "RemoveContainer" containerID="81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.106024 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} err="failed to get container status \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": rpc error: code = NotFound desc = could not find container \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": container with ID starting with 81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.106049 4836 scope.go:117] "RemoveContainer" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.106489 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} err="failed to get container status \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": rpc error: code = NotFound desc = could not find container \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": container with ID starting with 
61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.106523 4836 scope.go:117] "RemoveContainer" containerID="38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.106825 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83"} err="failed to get container status \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": rpc error: code = NotFound desc = could not find container \"38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83\": container with ID starting with 38fa1bf6b59359eb32319ce0184baa7dac44d10d2a0d9502169ea98eb3408c83 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.106846 4836 scope.go:117] "RemoveContainer" containerID="0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.107199 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e"} err="failed to get container status \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": rpc error: code = NotFound desc = could not find container \"0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e\": container with ID starting with 0c8d2edfc6a88391d034f310661e09eb58503f33f015b5e067654db06e8b152e not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.107222 4836 scope.go:117] "RemoveContainer" containerID="f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.107586 4836 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad"} err="failed to get container status \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": rpc error: code = NotFound desc = could not find container \"f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad\": container with ID starting with f5576148df454de0fb3d15ea9de93736aab907ca8cf09669c3c25fde79ec6fad not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.107619 4836 scope.go:117] "RemoveContainer" containerID="47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.107930 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a"} err="failed to get container status \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": rpc error: code = NotFound desc = could not find container \"47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a\": container with ID starting with 47fdc413501c45a2d6e531a1427fd6df3e9d2c57e3d60098b31fbfd695d98a0a not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.107949 4836 scope.go:117] "RemoveContainer" containerID="bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.108382 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee"} err="failed to get container status \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": rpc error: code = NotFound desc = could not find container \"bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee\": container with ID starting with bb05318b17722c3f731dc976d38c05a4ab7eafd686ad0fcf17bd01032409eaee not found: ID does not 
exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.108412 4836 scope.go:117] "RemoveContainer" containerID="ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.108656 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b"} err="failed to get container status \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": rpc error: code = NotFound desc = could not find container \"ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b\": container with ID starting with ebae2d4e3375fda616b0ea16c0c86a5c1c36004e7d22873a289ff37874bbc74b not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.108673 4836 scope.go:117] "RemoveContainer" containerID="1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.108953 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc"} err="failed to get container status \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": rpc error: code = NotFound desc = could not find container \"1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc\": container with ID starting with 1d36d269996169583e4e181aac060491640e74df94385276752f460c408273cc not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.108976 4836 scope.go:117] "RemoveContainer" containerID="c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.109244 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2"} err="failed to get container status 
\"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": rpc error: code = NotFound desc = could not find container \"c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2\": container with ID starting with c6545ec908eecb35a4405050dd77343f0acd4ec7d54ec55853a926e00b7f37f2 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.109281 4836 scope.go:117] "RemoveContainer" containerID="81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.109585 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9"} err="failed to get container status \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": rpc error: code = NotFound desc = could not find container \"81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9\": container with ID starting with 81e6d07edc7d7857cb288980bc7e9d5002234c2fe642df6798611b150b6d66e9 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.109615 4836 scope.go:117] "RemoveContainer" containerID="61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.109967 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775"} err="failed to get container status \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": rpc error: code = NotFound desc = could not find container \"61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775\": container with ID starting with 61bfc9a285863adea7e23ce49aaa4e592d2fd16c4e2e57632d659c3574f51775 not found: ID does not exist" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.199885 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.409852 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfznp"] Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.428225 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfznp"] Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.888197 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerDied","Data":"58e884a408a1fef2d13b71fe9ee8454ee2da8617e8e4073b51bdec20c186e183"} Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.888176 4836 generic.go:334] "Generic (PLEG): container finished" podID="dd545c20-7aca-4536-84b1-826c46c009f0" containerID="58e884a408a1fef2d13b71fe9ee8454ee2da8617e8e4073b51bdec20c186e183" exitCode=0 Feb 17 14:17:41 crc kubenswrapper[4836]: I0217 14:17:41.889571 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"942b9b8eb130dc6de6a87582e54cb8ca5624dfd821110527917b3ae4a552a902"} Feb 17 14:17:42 crc kubenswrapper[4836]: I0217 14:17:42.580213 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e" path="/var/lib/kubelet/pods/67e8cda7-ec53-43bd-9fec-8ac4d6ecc26e/volumes" Feb 17 14:17:42 crc kubenswrapper[4836]: I0217 14:17:42.898741 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"25737070acaff4cc5b998dfce6cc5ffcc9c81fcbac86a43c4d8b7f733a985974"} Feb 17 14:17:42 crc kubenswrapper[4836]: I0217 14:17:42.899120 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"d80ba1c6a69894dc15c15503c473f395763b874520ec225059efa316c66fc809"} Feb 17 14:17:42 crc kubenswrapper[4836]: I0217 14:17:42.899135 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"3e077857f671b4f7efe7c63fe55ad8a73a20c1a61c8713072c3124c3ed7a902a"} Feb 17 14:17:42 crc kubenswrapper[4836]: I0217 14:17:42.899148 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"69657103e4ae85731199e40014c18cac79931a53680f577dc479eed9c19ee7ba"} Feb 17 14:17:42 crc kubenswrapper[4836]: I0217 14:17:42.899159 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"c016150b9b2c5c308f421584fa3afce1a66602ef23b87219a13f0ab0cb7f4f15"} Feb 17 14:17:43 crc kubenswrapper[4836]: I0217 14:17:43.910671 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"71bbaacd7163148d079b9df98584e0d2a9d4d47c73e2db23db80124b13e0a1ce"} Feb 17 14:17:45 crc kubenswrapper[4836]: I0217 14:17:45.945981 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"34cb30fd8fa30cf5a52c92154e6dac1df01f64848e689157958282dc444bae67"} Feb 17 14:17:49 crc kubenswrapper[4836]: I0217 14:17:49.395757 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" 
event={"ID":"dd545c20-7aca-4536-84b1-826c46c009f0","Type":"ContainerStarted","Data":"40a86bd99cf3095d192bcd44811f4400a5de72117342bad74fe92645111c5f1e"} Feb 17 14:17:49 crc kubenswrapper[4836]: I0217 14:17:49.397730 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:49 crc kubenswrapper[4836]: I0217 14:17:49.397784 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:49 crc kubenswrapper[4836]: I0217 14:17:49.397797 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:49 crc kubenswrapper[4836]: I0217 14:17:49.492711 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:49 crc kubenswrapper[4836]: I0217 14:17:49.521966 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" podStartSLOduration=9.521918497 podStartE2EDuration="9.521918497s" podCreationTimestamp="2026-02-17 14:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:17:49.51589311 +0000 UTC m=+695.858821399" watchObservedRunningTime="2026-02-17 14:17:49.521918497 +0000 UTC m=+695.864846776" Feb 17 14:17:49 crc kubenswrapper[4836]: I0217 14:17:49.684126 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.215041 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk"] Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.215877 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.218193 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-2qj7z" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.218223 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.219830 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.347081 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm"] Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.347779 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.350742 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.350916 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-q77cg" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.366250 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr"] Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.366928 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:51 crc kubenswrapper[4836]: I0217 14:17:51.384492 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6j9\" (UniqueName: \"kubernetes.io/projected/755bc851-3fff-45db-bbcf-164a27afcf85-kube-api-access-pf6j9\") pod \"obo-prometheus-operator-68bc856cb9-xm2rk\" (UID: \"755bc851-3fff-45db-bbcf-164a27afcf85\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.102408 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce0a3fd2-d84a-417c-bd46-c0dba979376e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr\" (UID: \"ce0a3fd2-d84a-417c-bd46-c0dba979376e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.102505 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a9fdae1-f115-4e94-9b72-026862e02026-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm\" (UID: \"5a9fdae1-f115-4e94-9b72-026862e02026\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.131414 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf6j9\" (UniqueName: \"kubernetes.io/projected/755bc851-3fff-45db-bbcf-164a27afcf85-kube-api-access-pf6j9\") pod \"obo-prometheus-operator-68bc856cb9-xm2rk\" (UID: \"755bc851-3fff-45db-bbcf-164a27afcf85\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 
14:17:52.131478 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce0a3fd2-d84a-417c-bd46-c0dba979376e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr\" (UID: \"ce0a3fd2-d84a-417c-bd46-c0dba979376e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.131537 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a9fdae1-f115-4e94-9b72-026862e02026-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm\" (UID: \"5a9fdae1-f115-4e94-9b72-026862e02026\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.139196 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-f94f2"] Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.140204 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.148229 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-5k2fx" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.148736 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.164966 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf6j9\" (UniqueName: \"kubernetes.io/projected/755bc851-3fff-45db-bbcf-164a27afcf85-kube-api-access-pf6j9\") pod \"obo-prometheus-operator-68bc856cb9-xm2rk\" (UID: \"755bc851-3fff-45db-bbcf-164a27afcf85\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.232877 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce0a3fd2-d84a-417c-bd46-c0dba979376e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr\" (UID: \"ce0a3fd2-d84a-417c-bd46-c0dba979376e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.232938 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a9fdae1-f115-4e94-9b72-026862e02026-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm\" (UID: \"5a9fdae1-f115-4e94-9b72-026862e02026\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.232962 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ce0a3fd2-d84a-417c-bd46-c0dba979376e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr\" (UID: \"ce0a3fd2-d84a-417c-bd46-c0dba979376e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.232986 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a9fdae1-f115-4e94-9b72-026862e02026-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm\" (UID: \"5a9fdae1-f115-4e94-9b72-026862e02026\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.236846 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a9fdae1-f115-4e94-9b72-026862e02026-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm\" (UID: \"5a9fdae1-f115-4e94-9b72-026862e02026\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.239325 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a9fdae1-f115-4e94-9b72-026862e02026-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm\" (UID: \"5a9fdae1-f115-4e94-9b72-026862e02026\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.239462 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce0a3fd2-d84a-417c-bd46-c0dba979376e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr\" (UID: \"ce0a3fd2-d84a-417c-bd46-c0dba979376e\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.256313 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce0a3fd2-d84a-417c-bd46-c0dba979376e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr\" (UID: \"ce0a3fd2-d84a-417c-bd46-c0dba979376e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.263715 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.281676 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.314584 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(a102d7c03f3108e36b7bfb56594ed6512f487c682ec449ba153bc3d9cab5724a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.314748 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(a102d7c03f3108e36b7bfb56594ed6512f487c682ec449ba153bc3d9cab5724a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.314787 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(a102d7c03f3108e36b7bfb56594ed6512f487c682ec449ba153bc3d9cab5724a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.314847 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators(5a9fdae1-f115-4e94-9b72-026862e02026)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators(5a9fdae1-f115-4e94-9b72-026862e02026)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(a102d7c03f3108e36b7bfb56594ed6512f487c682ec449ba153bc3d9cab5724a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" podUID="5a9fdae1-f115-4e94-9b72-026862e02026" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.323054 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vqhkf"] Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.323802 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.326074 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2z5nq" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.328099 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(89d09f6731e23af0db5a3e6650e8f1bff1c0eeaf9b9d3946e1523c677187560c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.328141 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(89d09f6731e23af0db5a3e6650e8f1bff1c0eeaf9b9d3946e1523c677187560c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.328162 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(89d09f6731e23af0db5a3e6650e8f1bff1c0eeaf9b9d3946e1523c677187560c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.328208 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators(ce0a3fd2-d84a-417c-bd46-c0dba979376e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators(ce0a3fd2-d84a-417c-bd46-c0dba979376e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(89d09f6731e23af0db5a3e6650e8f1bff1c0eeaf9b9d3946e1523c677187560c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" podUID="ce0a3fd2-d84a-417c-bd46-c0dba979376e" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.333946 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578-observability-operator-tls\") pod \"observability-operator-59bdc8b94-f94f2\" (UID: \"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578\") " pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.334040 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbwp7\" (UniqueName: \"kubernetes.io/projected/d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578-kube-api-access-pbwp7\") pod \"observability-operator-59bdc8b94-f94f2\" (UID: \"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578\") " pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 
14:17:52.435064 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578-observability-operator-tls\") pod \"observability-operator-59bdc8b94-f94f2\" (UID: \"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578\") " pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.435129 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b6d996-7a86-4512-825f-6e6d34148862-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vqhkf\" (UID: \"c4b6d996-7a86-4512-825f-6e6d34148862\") " pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.435161 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbwp7\" (UniqueName: \"kubernetes.io/projected/d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578-kube-api-access-pbwp7\") pod \"observability-operator-59bdc8b94-f94f2\" (UID: \"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578\") " pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.435209 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwc4k\" (UniqueName: \"kubernetes.io/projected/c4b6d996-7a86-4512-825f-6e6d34148862-kube-api-access-lwc4k\") pod \"perses-operator-5bf474d74f-vqhkf\" (UID: \"c4b6d996-7a86-4512-825f-6e6d34148862\") " pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.439638 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.440444 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578-observability-operator-tls\") pod \"observability-operator-59bdc8b94-f94f2\" (UID: \"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578\") " pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.456375 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbwp7\" (UniqueName: \"kubernetes.io/projected/d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578-kube-api-access-pbwp7\") pod \"observability-operator-59bdc8b94-f94f2\" (UID: \"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578\") " pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.474037 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(a73fa1b3f7aea1c4edcde216ce381c1d7b965091a0920e08a20d46cf7f096c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.474107 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(a73fa1b3f7aea1c4edcde216ce381c1d7b965091a0920e08a20d46cf7f096c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.474129 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(a73fa1b3f7aea1c4edcde216ce381c1d7b965091a0920e08a20d46cf7f096c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.474171 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators(755bc851-3fff-45db-bbcf-164a27afcf85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators(755bc851-3fff-45db-bbcf-164a27afcf85)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(a73fa1b3f7aea1c4edcde216ce381c1d7b965091a0920e08a20d46cf7f096c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" podUID="755bc851-3fff-45db-bbcf-164a27afcf85" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.499381 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.529438 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7374031d3926788c7257128305991081ecceefe7149cfab81e10d7ed4f67c598): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.529501 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7374031d3926788c7257128305991081ecceefe7149cfab81e10d7ed4f67c598): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.529527 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7374031d3926788c7257128305991081ecceefe7149cfab81e10d7ed4f67c598): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.529593 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-f94f2_openshift-operators(d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-f94f2_openshift-operators(d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7374031d3926788c7257128305991081ecceefe7149cfab81e10d7ed4f67c598): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" podUID="d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.537751 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b6d996-7a86-4512-825f-6e6d34148862-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vqhkf\" (UID: \"c4b6d996-7a86-4512-825f-6e6d34148862\") " pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.537795 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwc4k\" (UniqueName: \"kubernetes.io/projected/c4b6d996-7a86-4512-825f-6e6d34148862-kube-api-access-lwc4k\") pod \"perses-operator-5bf474d74f-vqhkf\" (UID: \"c4b6d996-7a86-4512-825f-6e6d34148862\") " pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.539062 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c4b6d996-7a86-4512-825f-6e6d34148862-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vqhkf\" (UID: \"c4b6d996-7a86-4512-825f-6e6d34148862\") " pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.582120 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwc4k\" (UniqueName: \"kubernetes.io/projected/c4b6d996-7a86-4512-825f-6e6d34148862-kube-api-access-lwc4k\") pod \"perses-operator-5bf474d74f-vqhkf\" (UID: \"c4b6d996-7a86-4512-825f-6e6d34148862\") " pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.626109 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr"] Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.638495 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm"] Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.642474 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vqhkf"] Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.644998 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-f94f2"] Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.652332 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: I0217 14:17:52.676264 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk"] Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.691768 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(c8518216692ffff85a5810035447a57077c2cfa6128632f416a37aafc5adc68e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.691880 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(c8518216692ffff85a5810035447a57077c2cfa6128632f416a37aafc5adc68e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.691916 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(c8518216692ffff85a5810035447a57077c2cfa6128632f416a37aafc5adc68e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:52 crc kubenswrapper[4836]: E0217 14:17:52.691983 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vqhkf_openshift-operators(c4b6d996-7a86-4512-825f-6e6d34148862)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vqhkf_openshift-operators(c4b6d996-7a86-4512-825f-6e6d34148862)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(c8518216692ffff85a5810035447a57077c2cfa6128632f416a37aafc5adc68e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" podUID="c4b6d996-7a86-4512-825f-6e6d34148862" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214014 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214054 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214024 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214253 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214465 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214721 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214737 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214890 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.214945 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.215489 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.316518 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(c7b2d209d7ed1f8aefeae299e81c8622fc3e16d43f2d3f86380300c5ce4f151d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.316590 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(c7b2d209d7ed1f8aefeae299e81c8622fc3e16d43f2d3f86380300c5ce4f151d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.316613 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(c7b2d209d7ed1f8aefeae299e81c8622fc3e16d43f2d3f86380300c5ce4f151d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.316664 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators(ce0a3fd2-d84a-417c-bd46-c0dba979376e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators(ce0a3fd2-d84a-417c-bd46-c0dba979376e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(c7b2d209d7ed1f8aefeae299e81c8622fc3e16d43f2d3f86380300c5ce4f151d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" podUID="ce0a3fd2-d84a-417c-bd46-c0dba979376e" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.329969 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(a8b8939fd6aa2167a9f7f62aa79f4d937e00d5357472be28cc1a68701f421b45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.330053 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(a8b8939fd6aa2167a9f7f62aa79f4d937e00d5357472be28cc1a68701f421b45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.330079 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(a8b8939fd6aa2167a9f7f62aa79f4d937e00d5357472be28cc1a68701f421b45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.330125 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vqhkf_openshift-operators(c4b6d996-7a86-4512-825f-6e6d34148862)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vqhkf_openshift-operators(c4b6d996-7a86-4512-825f-6e6d34148862)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(a8b8939fd6aa2167a9f7f62aa79f4d937e00d5357472be28cc1a68701f421b45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" podUID="c4b6d996-7a86-4512-825f-6e6d34148862" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.348524 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(2ffa99ee1d8856d93373ba1de1f5431c31896fd44a02167ac392bee540256a8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.348607 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(2ffa99ee1d8856d93373ba1de1f5431c31896fd44a02167ac392bee540256a8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.348647 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(2ffa99ee1d8856d93373ba1de1f5431c31896fd44a02167ac392bee540256a8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.348695 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators(5a9fdae1-f115-4e94-9b72-026862e02026)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators(5a9fdae1-f115-4e94-9b72-026862e02026)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(2ffa99ee1d8856d93373ba1de1f5431c31896fd44a02167ac392bee540256a8b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" podUID="5a9fdae1-f115-4e94-9b72-026862e02026" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.356239 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(6717d87401b60614c8c16cf675baf03575f29616b818320e25760ec5e7aa98e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.356313 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(6717d87401b60614c8c16cf675baf03575f29616b818320e25760ec5e7aa98e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.356333 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(6717d87401b60614c8c16cf675baf03575f29616b818320e25760ec5e7aa98e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.356440 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators(755bc851-3fff-45db-bbcf-164a27afcf85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators(755bc851-3fff-45db-bbcf-164a27afcf85)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(6717d87401b60614c8c16cf675baf03575f29616b818320e25760ec5e7aa98e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" podUID="755bc851-3fff-45db-bbcf-164a27afcf85" Feb 17 14:17:53 crc kubenswrapper[4836]: I0217 14:17:53.567661 4836 scope.go:117] "RemoveContainer" containerID="d7051348fa11415bbd3ca42ccce04342cfc29fef1e5015e7fedf40514e49824c" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.568327 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c76cc_openshift-multus(592aa549-1b1b-441e-93e4-0821e05ff2b2)\"" pod="openshift-multus/multus-c76cc" podUID="592aa549-1b1b-441e-93e4-0821e05ff2b2" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.720028 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(76117cbc5c70c88088cbf577789548acc2269123c8e5cfa8d7c9d9a9894446db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.720093 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(76117cbc5c70c88088cbf577789548acc2269123c8e5cfa8d7c9d9a9894446db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.720114 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(76117cbc5c70c88088cbf577789548acc2269123c8e5cfa8d7c9d9a9894446db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:17:53 crc kubenswrapper[4836]: E0217 14:17:53.720157 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-f94f2_openshift-operators(d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-f94f2_openshift-operators(d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(76117cbc5c70c88088cbf577789548acc2269123c8e5cfa8d7c9d9a9894446db): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" podUID="d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578" Feb 17 14:18:06 crc kubenswrapper[4836]: I0217 14:18:06.567596 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:06 crc kubenswrapper[4836]: I0217 14:18:06.568828 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:06 crc kubenswrapper[4836]: E0217 14:18:06.600375 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(cdb695baf8e9d67708026ebd9d849b6f54a9010f36c3c63ddc9fde83ccd92990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:18:06 crc kubenswrapper[4836]: E0217 14:18:06.600633 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(cdb695baf8e9d67708026ebd9d849b6f54a9010f36c3c63ddc9fde83ccd92990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:06 crc kubenswrapper[4836]: E0217 14:18:06.600655 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(cdb695baf8e9d67708026ebd9d849b6f54a9010f36c3c63ddc9fde83ccd92990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:06 crc kubenswrapper[4836]: E0217 14:18:06.600701 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vqhkf_openshift-operators(c4b6d996-7a86-4512-825f-6e6d34148862)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vqhkf_openshift-operators(c4b6d996-7a86-4512-825f-6e6d34148862)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vqhkf_openshift-operators_c4b6d996-7a86-4512-825f-6e6d34148862_0(cdb695baf8e9d67708026ebd9d849b6f54a9010f36c3c63ddc9fde83ccd92990): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" podUID="c4b6d996-7a86-4512-825f-6e6d34148862" Feb 17 14:18:07 crc kubenswrapper[4836]: I0217 14:18:07.567650 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:18:07 crc kubenswrapper[4836]: I0217 14:18:07.568327 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:18:07 crc kubenswrapper[4836]: I0217 14:18:07.568949 4836 scope.go:117] "RemoveContainer" containerID="d7051348fa11415bbd3ca42ccce04342cfc29fef1e5015e7fedf40514e49824c" Feb 17 14:18:07 crc kubenswrapper[4836]: E0217 14:18:07.683628 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(7616aaca5ba6e128f1edcce6dab39e57b5d0e5b83eb05aa68fcb8f0aa79fae2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 17 14:18:07 crc kubenswrapper[4836]: E0217 14:18:07.683736 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(7616aaca5ba6e128f1edcce6dab39e57b5d0e5b83eb05aa68fcb8f0aa79fae2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:18:07 crc kubenswrapper[4836]: E0217 14:18:07.683765 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(7616aaca5ba6e128f1edcce6dab39e57b5d0e5b83eb05aa68fcb8f0aa79fae2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:18:07 crc kubenswrapper[4836]: E0217 14:18:07.683833 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators(ce0a3fd2-d84a-417c-bd46-c0dba979376e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators(ce0a3fd2-d84a-417c-bd46-c0dba979376e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_openshift-operators_ce0a3fd2-d84a-417c-bd46-c0dba979376e_0(7616aaca5ba6e128f1edcce6dab39e57b5d0e5b83eb05aa68fcb8f0aa79fae2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" podUID="ce0a3fd2-d84a-417c-bd46-c0dba979376e" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.365732 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/2.log" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.368802 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/1.log" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.368856 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c76cc" event={"ID":"592aa549-1b1b-441e-93e4-0821e05ff2b2","Type":"ContainerStarted","Data":"8c74a5866188271c5111852363699b9b2a7c209b6cfe49d8ec0dd64613ff8db7"} Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.571810 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.572322 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.572583 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.572795 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.572991 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:18:08 crc kubenswrapper[4836]: I0217 14:18:08.573200 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.624712 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(bbc9c8e79e3ea2f00f0d21fb0dc91089d8e1f283b9575b8718b508c7fa08ab63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.624820 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(bbc9c8e79e3ea2f00f0d21fb0dc91089d8e1f283b9575b8718b508c7fa08ab63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.624840 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(bbc9c8e79e3ea2f00f0d21fb0dc91089d8e1f283b9575b8718b508c7fa08ab63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.624903 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators(755bc851-3fff-45db-bbcf-164a27afcf85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators(755bc851-3fff-45db-bbcf-164a27afcf85)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xm2rk_openshift-operators_755bc851-3fff-45db-bbcf-164a27afcf85_0(bbc9c8e79e3ea2f00f0d21fb0dc91089d8e1f283b9575b8718b508c7fa08ab63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" podUID="755bc851-3fff-45db-bbcf-164a27afcf85" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.629089 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7b158f726609e5a26d4ec63e2ad43d6a3b9f29149d2ddf5dbb69b98f81a90f71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.629171 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7b158f726609e5a26d4ec63e2ad43d6a3b9f29149d2ddf5dbb69b98f81a90f71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.629201 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7b158f726609e5a26d4ec63e2ad43d6a3b9f29149d2ddf5dbb69b98f81a90f71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.629258 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-f94f2_openshift-operators(d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-f94f2_openshift-operators(d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-f94f2_openshift-operators_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578_0(7b158f726609e5a26d4ec63e2ad43d6a3b9f29149d2ddf5dbb69b98f81a90f71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" podUID="d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.633936 4836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(3a523f82bd10b53ba0c703d2e41044c6ff1f70dabeb8d55c7e592033b07e380b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.633987 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(3a523f82bd10b53ba0c703d2e41044c6ff1f70dabeb8d55c7e592033b07e380b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.634007 4836 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(3a523f82bd10b53ba0c703d2e41044c6ff1f70dabeb8d55c7e592033b07e380b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:18:08 crc kubenswrapper[4836]: E0217 14:18:08.634048 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators(5a9fdae1-f115-4e94-9b72-026862e02026)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators(5a9fdae1-f115-4e94-9b72-026862e02026)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_openshift-operators_5a9fdae1-f115-4e94-9b72-026862e02026_0(3a523f82bd10b53ba0c703d2e41044c6ff1f70dabeb8d55c7e592033b07e380b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" podUID="5a9fdae1-f115-4e94-9b72-026862e02026" Feb 17 14:18:11 crc kubenswrapper[4836]: I0217 14:18:11.273875 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nb8gc" Feb 17 14:18:15 crc kubenswrapper[4836]: I0217 14:18:15.008796 4836 scope.go:117] "RemoveContainer" containerID="b64d012815880d0ba6314438a96b0ffd6bdad4678135a9bd3c9c2a4d6eb83c41" Feb 17 14:18:15 crc kubenswrapper[4836]: I0217 14:18:15.413176 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c76cc_592aa549-1b1b-441e-93e4-0821e05ff2b2/kube-multus/2.log" Feb 17 14:18:20 crc kubenswrapper[4836]: I0217 14:18:20.567979 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:18:20 crc kubenswrapper[4836]: I0217 14:18:20.568224 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:20 crc kubenswrapper[4836]: I0217 14:18:20.569366 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" Feb 17 14:18:20 crc kubenswrapper[4836]: I0217 14:18:20.569834 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:20 crc kubenswrapper[4836]: I0217 14:18:20.885784 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr"] Feb 17 14:18:20 crc kubenswrapper[4836]: W0217 14:18:20.894478 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce0a3fd2_d84a_417c_bd46_c0dba979376e.slice/crio-49fefc9e7d18ab8caf445571ee5cc595e92005bf37309aed0e22e7f547957a34 WatchSource:0}: Error finding container 49fefc9e7d18ab8caf445571ee5cc595e92005bf37309aed0e22e7f547957a34: Status 404 returned error can't find the container with id 49fefc9e7d18ab8caf445571ee5cc595e92005bf37309aed0e22e7f547957a34 Feb 17 14:18:20 crc kubenswrapper[4836]: I0217 14:18:20.912110 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vqhkf"] Feb 17 14:18:20 crc kubenswrapper[4836]: W0217 14:18:20.917646 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4b6d996_7a86_4512_825f_6e6d34148862.slice/crio-dd7def9354ed9beb2f0764687f14d31da29f379e91018e84c3758233b0d4fbd6 WatchSource:0}: Error finding container dd7def9354ed9beb2f0764687f14d31da29f379e91018e84c3758233b0d4fbd6: Status 404 returned error can't find the container with id dd7def9354ed9beb2f0764687f14d31da29f379e91018e84c3758233b0d4fbd6 Feb 17 14:18:21 crc kubenswrapper[4836]: I0217 14:18:21.450218 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" event={"ID":"c4b6d996-7a86-4512-825f-6e6d34148862","Type":"ContainerStarted","Data":"dd7def9354ed9beb2f0764687f14d31da29f379e91018e84c3758233b0d4fbd6"} Feb 17 14:18:21 crc kubenswrapper[4836]: I0217 14:18:21.451525 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" event={"ID":"ce0a3fd2-d84a-417c-bd46-c0dba979376e","Type":"ContainerStarted","Data":"49fefc9e7d18ab8caf445571ee5cc595e92005bf37309aed0e22e7f547957a34"} Feb 17 14:18:21 crc kubenswrapper[4836]: I0217 14:18:21.567435 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:18:21 crc kubenswrapper[4836]: I0217 14:18:21.568263 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" Feb 17 14:18:21 crc kubenswrapper[4836]: I0217 14:18:21.815958 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk"] Feb 17 14:18:21 crc kubenswrapper[4836]: W0217 14:18:21.827578 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod755bc851_3fff_45db_bbcf_164a27afcf85.slice/crio-06c320e967b08b80978d88908ead978940ea5cebdc861719314ea2cc3c71cb82 WatchSource:0}: Error finding container 06c320e967b08b80978d88908ead978940ea5cebdc861719314ea2cc3c71cb82: Status 404 returned error can't find the container with id 06c320e967b08b80978d88908ead978940ea5cebdc861719314ea2cc3c71cb82 Feb 17 14:18:22 crc kubenswrapper[4836]: I0217 14:18:22.460454 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" event={"ID":"755bc851-3fff-45db-bbcf-164a27afcf85","Type":"ContainerStarted","Data":"06c320e967b08b80978d88908ead978940ea5cebdc861719314ea2cc3c71cb82"} Feb 17 14:18:22 crc kubenswrapper[4836]: I0217 14:18:22.567466 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:18:22 crc kubenswrapper[4836]: I0217 14:18:22.567530 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:22 crc kubenswrapper[4836]: I0217 14:18:22.568041 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" Feb 17 14:18:22 crc kubenswrapper[4836]: I0217 14:18:22.568864 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:22 crc kubenswrapper[4836]: I0217 14:18:22.959423 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-f94f2"] Feb 17 14:18:22 crc kubenswrapper[4836]: W0217 14:18:22.967471 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6acbcf2_dfc0_4a7b_b6bd_4b62c0b03578.slice/crio-d51bb9e93098b5cd655edc904abc10f7c5c285cca8b23062fb820b727c6d6941 WatchSource:0}: Error finding container d51bb9e93098b5cd655edc904abc10f7c5c285cca8b23062fb820b727c6d6941: Status 404 returned error can't find the container with id d51bb9e93098b5cd655edc904abc10f7c5c285cca8b23062fb820b727c6d6941 Feb 17 14:18:23 crc kubenswrapper[4836]: I0217 14:18:23.014074 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm"] Feb 17 14:18:23 crc kubenswrapper[4836]: W0217 14:18:23.026992 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a9fdae1_f115_4e94_9b72_026862e02026.slice/crio-054a3ff7a9e29c02b401290c1f9213e27ab3854468251eb7c846f66b3f7ecd60 WatchSource:0}: Error 
finding container 054a3ff7a9e29c02b401290c1f9213e27ab3854468251eb7c846f66b3f7ecd60: Status 404 returned error can't find the container with id 054a3ff7a9e29c02b401290c1f9213e27ab3854468251eb7c846f66b3f7ecd60 Feb 17 14:18:23 crc kubenswrapper[4836]: I0217 14:18:23.478513 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" event={"ID":"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578","Type":"ContainerStarted","Data":"d51bb9e93098b5cd655edc904abc10f7c5c285cca8b23062fb820b727c6d6941"} Feb 17 14:18:23 crc kubenswrapper[4836]: I0217 14:18:23.483705 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" event={"ID":"5a9fdae1-f115-4e94-9b72-026862e02026","Type":"ContainerStarted","Data":"054a3ff7a9e29c02b401290c1f9213e27ab3854468251eb7c846f66b3f7ecd60"} Feb 17 14:18:29 crc kubenswrapper[4836]: I0217 14:18:29.943318 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" event={"ID":"755bc851-3fff-45db-bbcf-164a27afcf85","Type":"ContainerStarted","Data":"f91bde5f52256cdc7cbbe106e02425ea11ef3ae0650501533003424513157952"} Feb 17 14:18:29 crc kubenswrapper[4836]: I0217 14:18:29.945042 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" event={"ID":"5a9fdae1-f115-4e94-9b72-026862e02026","Type":"ContainerStarted","Data":"4e57391a3d6ce3e1fa73c5de27f3090dd1065a387940c00c99871885ae404b29"} Feb 17 14:18:29 crc kubenswrapper[4836]: I0217 14:18:29.947205 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" event={"ID":"c4b6d996-7a86-4512-825f-6e6d34148862","Type":"ContainerStarted","Data":"c8662ff9c387493dd51bf7acbf6d1af480903838614b3eb0799c84a871a68b87"} Feb 17 14:18:29 crc kubenswrapper[4836]: I0217 14:18:29.947345 4836 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:29 crc kubenswrapper[4836]: I0217 14:18:29.949783 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" event={"ID":"ce0a3fd2-d84a-417c-bd46-c0dba979376e","Type":"ContainerStarted","Data":"68116ebd01f159fd46c207f99b23289dcfceebb669691ac495307e644c1a63af"} Feb 17 14:18:29 crc kubenswrapper[4836]: I0217 14:18:29.974733 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xm2rk" podStartSLOduration=31.327418557 podStartE2EDuration="38.974687349s" podCreationTimestamp="2026-02-17 14:17:51 +0000 UTC" firstStartedPulling="2026-02-17 14:18:21.829819144 +0000 UTC m=+728.172747413" lastFinishedPulling="2026-02-17 14:18:29.477087936 +0000 UTC m=+735.820016205" observedRunningTime="2026-02-17 14:18:29.968343624 +0000 UTC m=+736.311271913" watchObservedRunningTime="2026-02-17 14:18:29.974687349 +0000 UTC m=+736.317615639" Feb 17 14:18:30 crc kubenswrapper[4836]: I0217 14:18:30.311403 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" podStartSLOduration=29.782553461 podStartE2EDuration="38.311378454s" podCreationTimestamp="2026-02-17 14:17:52 +0000 UTC" firstStartedPulling="2026-02-17 14:18:20.920411109 +0000 UTC m=+727.263339378" lastFinishedPulling="2026-02-17 14:18:29.449236102 +0000 UTC m=+735.792164371" observedRunningTime="2026-02-17 14:18:30.304956475 +0000 UTC m=+736.647884754" watchObservedRunningTime="2026-02-17 14:18:30.311378454 +0000 UTC m=+736.654306743" Feb 17 14:18:30 crc kubenswrapper[4836]: I0217 14:18:30.352476 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr" 
podStartSLOduration=30.774288741 podStartE2EDuration="39.352444004s" podCreationTimestamp="2026-02-17 14:17:51 +0000 UTC" firstStartedPulling="2026-02-17 14:18:20.897977156 +0000 UTC m=+727.240905425" lastFinishedPulling="2026-02-17 14:18:29.476132419 +0000 UTC m=+735.819060688" observedRunningTime="2026-02-17 14:18:30.350120769 +0000 UTC m=+736.693049048" watchObservedRunningTime="2026-02-17 14:18:30.352444004 +0000 UTC m=+736.695372263" Feb 17 14:18:35 crc kubenswrapper[4836]: I0217 14:18:35.299224 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" event={"ID":"d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578","Type":"ContainerStarted","Data":"5824928e16ab2aba8546cfdc352c134bfff6bf88426c41f8c0c8d0e74db6324c"} Feb 17 14:18:35 crc kubenswrapper[4836]: I0217 14:18:35.299923 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:35 crc kubenswrapper[4836]: I0217 14:18:35.329227 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" podStartSLOduration=32.775299342 podStartE2EDuration="44.329199985s" podCreationTimestamp="2026-02-17 14:17:51 +0000 UTC" firstStartedPulling="2026-02-17 14:18:22.972191151 +0000 UTC m=+729.315119410" lastFinishedPulling="2026-02-17 14:18:34.526091784 +0000 UTC m=+740.869020053" observedRunningTime="2026-02-17 14:18:35.322957312 +0000 UTC m=+741.665885601" watchObservedRunningTime="2026-02-17 14:18:35.329199985 +0000 UTC m=+741.672128254" Feb 17 14:18:35 crc kubenswrapper[4836]: I0217 14:18:35.329599 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b474d8486-lngwm" podStartSLOduration=37.873284332 podStartE2EDuration="44.329594697s" podCreationTimestamp="2026-02-17 14:17:51 +0000 UTC" firstStartedPulling="2026-02-17 
14:18:23.03803346 +0000 UTC m=+729.380961729" lastFinishedPulling="2026-02-17 14:18:29.494343805 +0000 UTC m=+735.837272094" observedRunningTime="2026-02-17 14:18:30.381009397 +0000 UTC m=+736.723937666" watchObservedRunningTime="2026-02-17 14:18:35.329594697 +0000 UTC m=+741.672522966" Feb 17 14:18:35 crc kubenswrapper[4836]: I0217 14:18:35.359374 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-f94f2" Feb 17 14:18:42 crc kubenswrapper[4836]: I0217 14:18:42.656574 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-vqhkf" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.890666 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dmddv"] Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.891945 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.894020 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.894770 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.896773 4836 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kw992" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.906571 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vtfx4"] Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.906883 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcgbc\" (UniqueName: 
\"kubernetes.io/projected/918985c6-76a8-4bb2-8868-278b633133a9-kube-api-access-gcgbc\") pod \"cert-manager-cainjector-cf98fcc89-dmddv\" (UID: \"918985c6-76a8-4bb2-8868-278b633133a9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.907872 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vtfx4" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.911924 4836 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nq86z" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.914553 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zhbzj"] Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.915604 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.928955 4836 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vbn52" Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.933652 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vtfx4"] Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.939858 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zhbzj"] Feb 17 14:18:44 crc kubenswrapper[4836]: I0217 14:18:44.957038 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dmddv"] Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.008167 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcgbc\" (UniqueName: \"kubernetes.io/projected/918985c6-76a8-4bb2-8868-278b633133a9-kube-api-access-gcgbc\") pod 
\"cert-manager-cainjector-cf98fcc89-dmddv\" (UID: \"918985c6-76a8-4bb2-8868-278b633133a9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.008541 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qg8\" (UniqueName: \"kubernetes.io/projected/662067b4-39c2-4ab7-adb4-ba8a6330b0b9-kube-api-access-84qg8\") pod \"cert-manager-webhook-687f57d79b-zhbzj\" (UID: \"662067b4-39c2-4ab7-adb4-ba8a6330b0b9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.008748 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc878\" (UniqueName: \"kubernetes.io/projected/63f75031-4e24-42f7-80cc-2f3fb289dac0-kube-api-access-cc878\") pod \"cert-manager-858654f9db-vtfx4\" (UID: \"63f75031-4e24-42f7-80cc-2f3fb289dac0\") " pod="cert-manager/cert-manager-858654f9db-vtfx4" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.035767 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcgbc\" (UniqueName: \"kubernetes.io/projected/918985c6-76a8-4bb2-8868-278b633133a9-kube-api-access-gcgbc\") pod \"cert-manager-cainjector-cf98fcc89-dmddv\" (UID: \"918985c6-76a8-4bb2-8868-278b633133a9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.110735 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc878\" (UniqueName: \"kubernetes.io/projected/63f75031-4e24-42f7-80cc-2f3fb289dac0-kube-api-access-cc878\") pod \"cert-manager-858654f9db-vtfx4\" (UID: \"63f75031-4e24-42f7-80cc-2f3fb289dac0\") " pod="cert-manager/cert-manager-858654f9db-vtfx4" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.110842 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-84qg8\" (UniqueName: \"kubernetes.io/projected/662067b4-39c2-4ab7-adb4-ba8a6330b0b9-kube-api-access-84qg8\") pod \"cert-manager-webhook-687f57d79b-zhbzj\" (UID: \"662067b4-39c2-4ab7-adb4-ba8a6330b0b9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.137435 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qg8\" (UniqueName: \"kubernetes.io/projected/662067b4-39c2-4ab7-adb4-ba8a6330b0b9-kube-api-access-84qg8\") pod \"cert-manager-webhook-687f57d79b-zhbzj\" (UID: \"662067b4-39c2-4ab7-adb4-ba8a6330b0b9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.137457 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc878\" (UniqueName: \"kubernetes.io/projected/63f75031-4e24-42f7-80cc-2f3fb289dac0-kube-api-access-cc878\") pod \"cert-manager-858654f9db-vtfx4\" (UID: \"63f75031-4e24-42f7-80cc-2f3fb289dac0\") " pod="cert-manager/cert-manager-858654f9db-vtfx4" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.216223 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.240226 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vtfx4" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.251531 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.820175 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dmddv"] Feb 17 14:18:45 crc kubenswrapper[4836]: I0217 14:18:45.958449 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zhbzj"] Feb 17 14:18:46 crc kubenswrapper[4836]: I0217 14:18:46.136042 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vtfx4"] Feb 17 14:18:46 crc kubenswrapper[4836]: W0217 14:18:46.140133 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63f75031_4e24_42f7_80cc_2f3fb289dac0.slice/crio-13fa2482bef76bef78e56e4d59c9fa10d65b9ec893b2abd432a09a9a5a29d1a7 WatchSource:0}: Error finding container 13fa2482bef76bef78e56e4d59c9fa10d65b9ec893b2abd432a09a9a5a29d1a7: Status 404 returned error can't find the container with id 13fa2482bef76bef78e56e4d59c9fa10d65b9ec893b2abd432a09a9a5a29d1a7 Feb 17 14:18:46 crc kubenswrapper[4836]: I0217 14:18:46.454527 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" event={"ID":"918985c6-76a8-4bb2-8868-278b633133a9","Type":"ContainerStarted","Data":"e77d45274e336b86af9f0403d5e2f4a1a86bd5f0902a50a6e16bbe6a9120ceb4"} Feb 17 14:18:46 crc kubenswrapper[4836]: I0217 14:18:46.455660 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" event={"ID":"662067b4-39c2-4ab7-adb4-ba8a6330b0b9","Type":"ContainerStarted","Data":"2961909ead6c585030cebc332cd7163a3d59b4a09191eb7a33f381ea3f8e925d"} Feb 17 14:18:46 crc kubenswrapper[4836]: I0217 14:18:46.456747 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vtfx4" 
event={"ID":"63f75031-4e24-42f7-80cc-2f3fb289dac0","Type":"ContainerStarted","Data":"13fa2482bef76bef78e56e4d59c9fa10d65b9ec893b2abd432a09a9a5a29d1a7"} Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.281962 4836 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.853529 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" event={"ID":"662067b4-39c2-4ab7-adb4-ba8a6330b0b9","Type":"ContainerStarted","Data":"3e08ab673431ee790211e9556c838c1bf6f6fb7544b0d27a27043570e3c38044"} Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.853726 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.857136 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vtfx4" event={"ID":"63f75031-4e24-42f7-80cc-2f3fb289dac0","Type":"ContainerStarted","Data":"17d85d2266dcb6ef95d13d8fc7572ab81e7ebc3e1ddeff21fbe69d2fcb5b306b"} Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.860043 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" event={"ID":"918985c6-76a8-4bb2-8868-278b633133a9","Type":"ContainerStarted","Data":"5b7f25a98a96ce86bd9c258bb498ee41b2e57b44349490d627627adcca770733"} Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.874679 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" podStartSLOduration=2.761455248 podStartE2EDuration="10.874659006s" podCreationTimestamp="2026-02-17 14:18:44 +0000 UTC" firstStartedPulling="2026-02-17 14:18:45.971834621 +0000 UTC m=+752.314762890" lastFinishedPulling="2026-02-17 14:18:54.085038379 +0000 UTC m=+760.427966648" 
observedRunningTime="2026-02-17 14:18:54.871691073 +0000 UTC m=+761.214619342" watchObservedRunningTime="2026-02-17 14:18:54.874659006 +0000 UTC m=+761.217587265" Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.898829 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vtfx4" podStartSLOduration=2.894071183 podStartE2EDuration="10.898790466s" podCreationTimestamp="2026-02-17 14:18:44 +0000 UTC" firstStartedPulling="2026-02-17 14:18:46.143321436 +0000 UTC m=+752.486249705" lastFinishedPulling="2026-02-17 14:18:54.148040719 +0000 UTC m=+760.490968988" observedRunningTime="2026-02-17 14:18:54.88922695 +0000 UTC m=+761.232155219" watchObservedRunningTime="2026-02-17 14:18:54.898790466 +0000 UTC m=+761.241718745" Feb 17 14:18:54 crc kubenswrapper[4836]: I0217 14:18:54.920743 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dmddv" podStartSLOduration=2.67655523 podStartE2EDuration="10.920724665s" podCreationTimestamp="2026-02-17 14:18:44 +0000 UTC" firstStartedPulling="2026-02-17 14:18:45.841040198 +0000 UTC m=+752.183968467" lastFinishedPulling="2026-02-17 14:18:54.085209633 +0000 UTC m=+760.428137902" observedRunningTime="2026-02-17 14:18:54.918800491 +0000 UTC m=+761.261728760" watchObservedRunningTime="2026-02-17 14:18:54.920724665 +0000 UTC m=+761.263652944" Feb 17 14:19:00 crc kubenswrapper[4836]: I0217 14:19:00.257944 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-zhbzj" Feb 17 14:19:29 crc kubenswrapper[4836]: I0217 14:19:29.765027 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:19:29 crc 
kubenswrapper[4836]: I0217 14:19:29.765810 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.658159 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf"] Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.659639 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.661735 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.680508 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf"] Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.839233 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsk4\" (UniqueName: \"kubernetes.io/projected/3464477d-9902-4d40-9048-443132123fb3-kube-api-access-fvsk4\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.839372 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-bundle\") pod 
\"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.839432 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.941605 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvsk4\" (UniqueName: \"kubernetes.io/projected/3464477d-9902-4d40-9048-443132123fb3-kube-api-access-fvsk4\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.942077 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.942255 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " 
pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.943031 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.943156 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:31 crc kubenswrapper[4836]: I0217 14:19:31.978128 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvsk4\" (UniqueName: \"kubernetes.io/projected/3464477d-9902-4d40-9048-443132123fb3-kube-api-access-fvsk4\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:32 crc kubenswrapper[4836]: I0217 14:19:32.016716 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:32 crc kubenswrapper[4836]: I0217 14:19:32.544219 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf"] Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.133359 4836 generic.go:334] "Generic (PLEG): container finished" podID="3464477d-9902-4d40-9048-443132123fb3" containerID="65aa24d05f36a4132e36a602723d81f5d15fc397e4fc83e26cf4a6780a6cbce0" exitCode=0 Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.133415 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" event={"ID":"3464477d-9902-4d40-9048-443132123fb3","Type":"ContainerDied","Data":"65aa24d05f36a4132e36a602723d81f5d15fc397e4fc83e26cf4a6780a6cbce0"} Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.133673 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" event={"ID":"3464477d-9902-4d40-9048-443132123fb3","Type":"ContainerStarted","Data":"ad0f9defb731bc03c633adaa62a2320c1a2b5f7f64dd33ce92cf51e6d63cea84"} Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.900753 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.901695 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.903986 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.905199 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.906213 4836 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-4mnhk" Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.914516 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.942617 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lzp6h"] Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.945869 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:33 crc kubenswrapper[4836]: I0217 14:19:33.955772 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzp6h"] Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.062824 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\") pod \"minio\" (UID: \"673f5440-6cd4-4341-8388-fdf924e48044\") " pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.062889 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-catalog-content\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " 
pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.062926 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-utilities\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.062955 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/cfed558c-2562-4771-af8e-bc422f87be49-kube-api-access-tdlcw\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.062985 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tst2s\" (UniqueName: \"kubernetes.io/projected/673f5440-6cd4-4341-8388-fdf924e48044-kube-api-access-tst2s\") pod \"minio\" (UID: \"673f5440-6cd4-4341-8388-fdf924e48044\") " pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.163960 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-utilities\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.164049 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/cfed558c-2562-4771-af8e-bc422f87be49-kube-api-access-tdlcw\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " 
pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.164091 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tst2s\" (UniqueName: \"kubernetes.io/projected/673f5440-6cd4-4341-8388-fdf924e48044-kube-api-access-tst2s\") pod \"minio\" (UID: \"673f5440-6cd4-4341-8388-fdf924e48044\") " pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.164156 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\") pod \"minio\" (UID: \"673f5440-6cd4-4341-8388-fdf924e48044\") " pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.164187 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-catalog-content\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.164579 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-catalog-content\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.164629 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-utilities\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 
14:19:34.177028 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.177093 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\") pod \"minio\" (UID: \"673f5440-6cd4-4341-8388-fdf924e48044\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/629f4bca77debccc3b1fdd12a1a4fe57c6a22cc0f05c47e1303e3db1224caa2f/globalmount\"" pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.191631 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/cfed558c-2562-4771-af8e-bc422f87be49-kube-api-access-tdlcw\") pod \"redhat-operators-lzp6h\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") " pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.196398 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tst2s\" (UniqueName: \"kubernetes.io/projected/673f5440-6cd4-4341-8388-fdf924e48044-kube-api-access-tst2s\") pod \"minio\" (UID: \"673f5440-6cd4-4341-8388-fdf924e48044\") " pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.207434 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2c45594-ebae-414b-b3cb-e8abe867c20a\") pod \"minio\" (UID: \"673f5440-6cd4-4341-8388-fdf924e48044\") " pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.219336 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.265255 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzp6h" Feb 17 14:19:34 crc kubenswrapper[4836]: I0217 14:19:34.633130 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 14:19:34 crc kubenswrapper[4836]: W0217 14:19:34.636201 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod673f5440_6cd4_4341_8388_fdf924e48044.slice/crio-35ea741011fbc50b7e7ff703b1d8cfc72dafe3c84b464800e5c17d49f9283249 WatchSource:0}: Error finding container 35ea741011fbc50b7e7ff703b1d8cfc72dafe3c84b464800e5c17d49f9283249: Status 404 returned error can't find the container with id 35ea741011fbc50b7e7ff703b1d8cfc72dafe3c84b464800e5c17d49f9283249 Feb 17 14:19:35 crc kubenswrapper[4836]: I0217 14:19:35.009621 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzp6h"] Feb 17 14:19:35 crc kubenswrapper[4836]: W0217 14:19:35.016942 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfed558c_2562_4771_af8e_bc422f87be49.slice/crio-673a85edd931637c360a40a9b7c8e08f34e58e0982f41b7b113c561515f17ba5 WatchSource:0}: Error finding container 673a85edd931637c360a40a9b7c8e08f34e58e0982f41b7b113c561515f17ba5: Status 404 returned error can't find the container with id 673a85edd931637c360a40a9b7c8e08f34e58e0982f41b7b113c561515f17ba5 Feb 17 14:19:35 crc kubenswrapper[4836]: I0217 14:19:35.151398 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"673f5440-6cd4-4341-8388-fdf924e48044","Type":"ContainerStarted","Data":"35ea741011fbc50b7e7ff703b1d8cfc72dafe3c84b464800e5c17d49f9283249"} Feb 17 14:19:35 crc kubenswrapper[4836]: I0217 14:19:35.156224 4836 
generic.go:334] "Generic (PLEG): container finished" podID="3464477d-9902-4d40-9048-443132123fb3" containerID="844e099e74476fd7a0a52853c101508eb28066193f6a0fd9d6d237ab45adab36" exitCode=0 Feb 17 14:19:35 crc kubenswrapper[4836]: I0217 14:19:35.156338 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" event={"ID":"3464477d-9902-4d40-9048-443132123fb3","Type":"ContainerDied","Data":"844e099e74476fd7a0a52853c101508eb28066193f6a0fd9d6d237ab45adab36"} Feb 17 14:19:35 crc kubenswrapper[4836]: I0217 14:19:35.158849 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzp6h" event={"ID":"cfed558c-2562-4771-af8e-bc422f87be49","Type":"ContainerStarted","Data":"673a85edd931637c360a40a9b7c8e08f34e58e0982f41b7b113c561515f17ba5"} Feb 17 14:19:36 crc kubenswrapper[4836]: I0217 14:19:36.182790 4836 generic.go:334] "Generic (PLEG): container finished" podID="3464477d-9902-4d40-9048-443132123fb3" containerID="66a2284aeac61984203666d085154cc52e97f425c50842712f125a2a2476f42e" exitCode=0 Feb 17 14:19:36 crc kubenswrapper[4836]: I0217 14:19:36.183669 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" event={"ID":"3464477d-9902-4d40-9048-443132123fb3","Type":"ContainerDied","Data":"66a2284aeac61984203666d085154cc52e97f425c50842712f125a2a2476f42e"} Feb 17 14:19:36 crc kubenswrapper[4836]: I0217 14:19:36.187998 4836 generic.go:334] "Generic (PLEG): container finished" podID="cfed558c-2562-4771-af8e-bc422f87be49" containerID="d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce" exitCode=0 Feb 17 14:19:36 crc kubenswrapper[4836]: I0217 14:19:36.188076 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzp6h" 
event={"ID":"cfed558c-2562-4771-af8e-bc422f87be49","Type":"ContainerDied","Data":"d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce"} Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.379313 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.483560 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-bundle\") pod \"3464477d-9902-4d40-9048-443132123fb3\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.483618 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-util\") pod \"3464477d-9902-4d40-9048-443132123fb3\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.483686 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvsk4\" (UniqueName: \"kubernetes.io/projected/3464477d-9902-4d40-9048-443132123fb3-kube-api-access-fvsk4\") pod \"3464477d-9902-4d40-9048-443132123fb3\" (UID: \"3464477d-9902-4d40-9048-443132123fb3\") " Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.485872 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-bundle" (OuterVolumeSpecName: "bundle") pod "3464477d-9902-4d40-9048-443132123fb3" (UID: "3464477d-9902-4d40-9048-443132123fb3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.495798 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3464477d-9902-4d40-9048-443132123fb3-kube-api-access-fvsk4" (OuterVolumeSpecName: "kube-api-access-fvsk4") pod "3464477d-9902-4d40-9048-443132123fb3" (UID: "3464477d-9902-4d40-9048-443132123fb3"). InnerVolumeSpecName "kube-api-access-fvsk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.585119 4836 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.585159 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvsk4\" (UniqueName: \"kubernetes.io/projected/3464477d-9902-4d40-9048-443132123fb3-kube-api-access-fvsk4\") on node \"crc\" DevicePath \"\"" Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.595773 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-util" (OuterVolumeSpecName: "util") pod "3464477d-9902-4d40-9048-443132123fb3" (UID: "3464477d-9902-4d40-9048-443132123fb3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:19:38 crc kubenswrapper[4836]: I0217 14:19:38.686565 4836 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3464477d-9902-4d40-9048-443132123fb3-util\") on node \"crc\" DevicePath \"\""
Feb 17 14:19:39 crc kubenswrapper[4836]: I0217 14:19:39.228038 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf" event={"ID":"3464477d-9902-4d40-9048-443132123fb3","Type":"ContainerDied","Data":"ad0f9defb731bc03c633adaa62a2320c1a2b5f7f64dd33ce92cf51e6d63cea84"}
Feb 17 14:19:39 crc kubenswrapper[4836]: I0217 14:19:39.228116 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0f9defb731bc03c633adaa62a2320c1a2b5f7f64dd33ce92cf51e6d63cea84"
Feb 17 14:19:39 crc kubenswrapper[4836]: I0217 14:19:39.228251 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf"
Feb 17 14:19:42 crc kubenswrapper[4836]: I0217 14:19:42.477742 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzp6h" event={"ID":"cfed558c-2562-4771-af8e-bc422f87be49","Type":"ContainerStarted","Data":"d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238"}
Feb 17 14:19:42 crc kubenswrapper[4836]: I0217 14:19:42.496908 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"673f5440-6cd4-4341-8388-fdf924e48044","Type":"ContainerStarted","Data":"2f9bf953667ce4e254bcd6e56c58f905f8da40fdf5143fc07bd7d53964847e14"}
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.520281 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=10.110343819 podStartE2EDuration="17.520254933s" podCreationTimestamp="2026-02-17 14:19:31 +0000 UTC" firstStartedPulling="2026-02-17 14:19:34.63907564 +0000 UTC m=+800.982003909" lastFinishedPulling="2026-02-17 14:19:42.048986754 +0000 UTC m=+808.391915023" observedRunningTime="2026-02-17 14:19:42.925159465 +0000 UTC m=+809.268087744" watchObservedRunningTime="2026-02-17 14:19:48.520254933 +0000 UTC m=+814.863183202"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.524080 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"]
Feb 17 14:19:48 crc kubenswrapper[4836]: E0217 14:19:48.524439 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3464477d-9902-4d40-9048-443132123fb3" containerName="util"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.524463 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="3464477d-9902-4d40-9048-443132123fb3" containerName="util"
Feb 17 14:19:48 crc kubenswrapper[4836]: E0217 14:19:48.524485 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3464477d-9902-4d40-9048-443132123fb3" containerName="pull"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.524498 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="3464477d-9902-4d40-9048-443132123fb3" containerName="pull"
Feb 17 14:19:48 crc kubenswrapper[4836]: E0217 14:19:48.524514 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3464477d-9902-4d40-9048-443132123fb3" containerName="extract"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.524528 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="3464477d-9902-4d40-9048-443132123fb3" containerName="extract"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.524691 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="3464477d-9902-4d40-9048-443132123fb3" containerName="extract"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.525843 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.528187 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b54rn\" (UniqueName: \"kubernetes.io/projected/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-kube-api-access-b54rn\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.528447 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.528544 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-apiservice-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.528666 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-manager-config\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.528724 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-webhook-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.531402 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.532352 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.532403 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.533463 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.534961 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.536996 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-xqgbn"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.630485 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b54rn\" (UniqueName: \"kubernetes.io/projected/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-kube-api-access-b54rn\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.630741 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.630837 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-apiservice-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.630911 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-manager-config\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.630953 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-webhook-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.633002 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-manager-config\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.726995 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.733422 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"]
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.740730 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-webhook-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.744551 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-apiservice-cert\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.782148 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b54rn\" (UniqueName: \"kubernetes.io/projected/297a6b35-d11d-4c2b-858c-79cb4c3c1b2c-kube-api-access-b54rn\") pod \"loki-operator-controller-manager-dfd4b8c4b-kclf7\" (UID: \"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:48 crc kubenswrapper[4836]: I0217 14:19:48.848853 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:19:51 crc kubenswrapper[4836]: I0217 14:19:51.049013 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"]
Feb 17 14:19:51 crc kubenswrapper[4836]: I0217 14:19:51.072033 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7" event={"ID":"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c","Type":"ContainerStarted","Data":"6b58103ebb3f82fe763cafe563d382e1d62fa5efd2795d748f78f78246f2b428"}
Feb 17 14:19:53 crc kubenswrapper[4836]: I0217 14:19:53.283469 4836 generic.go:334] "Generic (PLEG): container finished" podID="cfed558c-2562-4771-af8e-bc422f87be49" containerID="d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238" exitCode=0
Feb 17 14:19:53 crc kubenswrapper[4836]: I0217 14:19:53.283865 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzp6h" event={"ID":"cfed558c-2562-4771-af8e-bc422f87be49","Type":"ContainerDied","Data":"d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238"}
Feb 17 14:19:55 crc kubenswrapper[4836]: I0217 14:19:55.313848 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzp6h" event={"ID":"cfed558c-2562-4771-af8e-bc422f87be49","Type":"ContainerStarted","Data":"a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69"}
Feb 17 14:19:55 crc kubenswrapper[4836]: I0217 14:19:55.346989 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lzp6h" podStartSLOduration=5.886289972 podStartE2EDuration="22.346960575s" podCreationTimestamp="2026-02-17 14:19:33 +0000 UTC" firstStartedPulling="2026-02-17 14:19:37.692717493 +0000 UTC m=+804.035645762" lastFinishedPulling="2026-02-17 14:19:54.153388096 +0000 UTC m=+820.496316365" observedRunningTime="2026-02-17 14:19:55.34136655 +0000 UTC m=+821.684294819" watchObservedRunningTime="2026-02-17 14:19:55.346960575 +0000 UTC m=+821.689888864"
Feb 17 14:19:59 crc kubenswrapper[4836]: I0217 14:19:59.766600 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:19:59 crc kubenswrapper[4836]: I0217 14:19:59.767636 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:20:03 crc kubenswrapper[4836]: I0217 14:20:03.457080 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7" event={"ID":"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c","Type":"ContainerStarted","Data":"ebabf52e45b1432c2f537d383dbd701657289b74835ccf4e71171e80022c892b"}
Feb 17 14:20:04 crc kubenswrapper[4836]: I0217 14:20:04.266276 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lzp6h"
Feb 17 14:20:04 crc kubenswrapper[4836]: I0217 14:20:04.266343 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lzp6h"
Feb 17 14:20:05 crc kubenswrapper[4836]: I0217 14:20:05.322107 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lzp6h" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="registry-server" probeResult="failure" output=<
Feb 17 14:20:05 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s
Feb 17 14:20:05 crc kubenswrapper[4836]: >
Feb 17 14:20:13 crc kubenswrapper[4836]: I0217 14:20:13.651475 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7" event={"ID":"297a6b35-d11d-4c2b-858c-79cb4c3c1b2c","Type":"ContainerStarted","Data":"9f3f299bf99b01623cba9bbf6b343240ae7d86b324fbd64168c45c1bf7eea652"}
Feb 17 14:20:13 crc kubenswrapper[4836]: I0217 14:20:13.652847 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:20:13 crc kubenswrapper[4836]: I0217 14:20:13.655475 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7"
Feb 17 14:20:13 crc kubenswrapper[4836]: I0217 14:20:13.698154 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-dfd4b8c4b-kclf7" podStartSLOduration=3.530038009 podStartE2EDuration="25.698120548s" podCreationTimestamp="2026-02-17 14:19:48 +0000 UTC" firstStartedPulling="2026-02-17 14:19:51.05927638 +0000 UTC m=+817.402204649" lastFinishedPulling="2026-02-17 14:20:13.227358919 +0000 UTC m=+839.570287188" observedRunningTime="2026-02-17 14:20:13.690227699 +0000 UTC m=+840.033155968" watchObservedRunningTime="2026-02-17 14:20:13.698120548 +0000 UTC m=+840.041048817"
Feb 17 14:20:14 crc kubenswrapper[4836]: I0217 14:20:14.324692 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lzp6h"
Feb 17 14:20:14 crc kubenswrapper[4836]: I0217 14:20:14.366039 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lzp6h"
Feb 17 14:20:16 crc kubenswrapper[4836]: I0217 14:20:16.728905 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzp6h"]
Feb 17 14:20:16 crc kubenswrapper[4836]: I0217 14:20:16.729637 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lzp6h" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="registry-server" containerID="cri-o://a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69" gracePeriod=2
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.197719 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzp6h"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.355695 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-catalog-content\") pod \"cfed558c-2562-4771-af8e-bc422f87be49\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") "
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.355887 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/cfed558c-2562-4771-af8e-bc422f87be49-kube-api-access-tdlcw\") pod \"cfed558c-2562-4771-af8e-bc422f87be49\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") "
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.355978 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-utilities\") pod \"cfed558c-2562-4771-af8e-bc422f87be49\" (UID: \"cfed558c-2562-4771-af8e-bc422f87be49\") "
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.356985 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-utilities" (OuterVolumeSpecName: "utilities") pod "cfed558c-2562-4771-af8e-bc422f87be49" (UID: "cfed558c-2562-4771-af8e-bc422f87be49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.362515 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfed558c-2562-4771-af8e-bc422f87be49-kube-api-access-tdlcw" (OuterVolumeSpecName: "kube-api-access-tdlcw") pod "cfed558c-2562-4771-af8e-bc422f87be49" (UID: "cfed558c-2562-4771-af8e-bc422f87be49"). InnerVolumeSpecName "kube-api-access-tdlcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.457364 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/cfed558c-2562-4771-af8e-bc422f87be49-kube-api-access-tdlcw\") on node \"crc\" DevicePath \"\""
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.457401 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.486427 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfed558c-2562-4771-af8e-bc422f87be49" (UID: "cfed558c-2562-4771-af8e-bc422f87be49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.561011 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfed558c-2562-4771-af8e-bc422f87be49-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.753053 4836 generic.go:334] "Generic (PLEG): container finished" podID="cfed558c-2562-4771-af8e-bc422f87be49" containerID="a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69" exitCode=0
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.753125 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzp6h" event={"ID":"cfed558c-2562-4771-af8e-bc422f87be49","Type":"ContainerDied","Data":"a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69"}
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.753180 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzp6h" event={"ID":"cfed558c-2562-4771-af8e-bc422f87be49","Type":"ContainerDied","Data":"673a85edd931637c360a40a9b7c8e08f34e58e0982f41b7b113c561515f17ba5"}
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.753179 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzp6h"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.753203 4836 scope.go:117] "RemoveContainer" containerID="a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.783111 4836 scope.go:117] "RemoveContainer" containerID="d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.794373 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzp6h"]
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.798729 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lzp6h"]
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.809415 4836 scope.go:117] "RemoveContainer" containerID="d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.828899 4836 scope.go:117] "RemoveContainer" containerID="a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69"
Feb 17 14:20:17 crc kubenswrapper[4836]: E0217 14:20:17.829640 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69\": container with ID starting with a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69 not found: ID does not exist" containerID="a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.829719 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69"} err="failed to get container status \"a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69\": rpc error: code = NotFound desc = could not find container \"a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69\": container with ID starting with a270a09a390ebcea0a4f509773de7bae62620250eaf8884cedb160dbb4315a69 not found: ID does not exist"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.829762 4836 scope.go:117] "RemoveContainer" containerID="d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238"
Feb 17 14:20:17 crc kubenswrapper[4836]: E0217 14:20:17.830343 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238\": container with ID starting with d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238 not found: ID does not exist" containerID="d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.830449 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238"} err="failed to get container status \"d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238\": rpc error: code = NotFound desc = could not find container \"d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238\": container with ID starting with d2cbaf947c2e327430e0f53838c199bd1b517cad7c1f8cc6fa7d59ca6b872238 not found: ID does not exist"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.830534 4836 scope.go:117] "RemoveContainer" containerID="d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce"
Feb 17 14:20:17 crc kubenswrapper[4836]: E0217 14:20:17.831214 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce\": container with ID starting with d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce not found: ID does not exist" containerID="d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce"
Feb 17 14:20:17 crc kubenswrapper[4836]: I0217 14:20:17.831310 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce"} err="failed to get container status \"d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce\": rpc error: code = NotFound desc = could not find container \"d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce\": container with ID starting with d4dfb4a28991a2ad7ad1d97f7e9b6acb9b352c07cce3ac2497280bb1a45e40ce not found: ID does not exist"
Feb 17 14:20:18 crc kubenswrapper[4836]: I0217 14:20:18.581838 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfed558c-2562-4771-af8e-bc422f87be49" path="/var/lib/kubelet/pods/cfed558c-2562-4771-af8e-bc422f87be49/volumes"
Feb 17 14:20:29 crc kubenswrapper[4836]: I0217 14:20:29.764877 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:20:29 crc kubenswrapper[4836]: I0217 14:20:29.765931 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:20:29 crc kubenswrapper[4836]: I0217 14:20:29.765999 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g"
Feb 17 14:20:29 crc kubenswrapper[4836]: I0217 14:20:29.832393 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7f43ee4be167fb696d056804834f76d74b6a96b2dd00fc7f1328e7b9c2e7869"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 14:20:29 crc kubenswrapper[4836]: I0217 14:20:29.832504 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://d7f43ee4be167fb696d056804834f76d74b6a96b2dd00fc7f1328e7b9c2e7869" gracePeriod=600
Feb 17 14:20:30 crc kubenswrapper[4836]: I0217 14:20:30.841899 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="d7f43ee4be167fb696d056804834f76d74b6a96b2dd00fc7f1328e7b9c2e7869" exitCode=0
Feb 17 14:20:30 crc kubenswrapper[4836]: I0217 14:20:30.841979 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"d7f43ee4be167fb696d056804834f76d74b6a96b2dd00fc7f1328e7b9c2e7869"}
Feb 17 14:20:30 crc kubenswrapper[4836]: I0217 14:20:30.842643 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"89b78e4cc2264dc06417ab903dd2a1618c1aee2c1d950babae0b011a2e9eac59"}
Feb 17 14:20:30 crc kubenswrapper[4836]: I0217 14:20:30.842668 4836 scope.go:117] "RemoveContainer" containerID="1b2a0d64ec4a5faa95e6312a8de2b21c8f3e85f4d851c39760904a4b16753249"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.008080 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"]
Feb 17 14:20:40 crc kubenswrapper[4836]: E0217 14:20:40.008889 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="extract-utilities"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.008906 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="extract-utilities"
Feb 17 14:20:40 crc kubenswrapper[4836]: E0217 14:20:40.008924 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="registry-server"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.008931 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="registry-server"
Feb 17 14:20:40 crc kubenswrapper[4836]: E0217 14:20:40.008946 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="extract-content"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.008952 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="extract-content"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.009062 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfed558c-2562-4771-af8e-bc422f87be49" containerName="registry-server"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.023634 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.031912 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.055420 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"]
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.082825 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.082868 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.082890 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvl4\" (UniqueName: \"kubernetes.io/projected/96be2236-f07d-4944-8afa-b15a4ce0c4f0-kube-api-access-ckvl4\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.183877 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.184271 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.184317 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckvl4\" (UniqueName: \"kubernetes.io/projected/96be2236-f07d-4944-8afa-b15a4ce0c4f0-kube-api-access-ckvl4\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.184517 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.184810 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.223424 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckvl4\" (UniqueName: \"kubernetes.io/projected/96be2236-f07d-4944-8afa-b15a4ce0c4f0-kube-api-access-ckvl4\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.348422 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.611991 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz"]
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.929622 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" event={"ID":"96be2236-f07d-4944-8afa-b15a4ce0c4f0","Type":"ContainerStarted","Data":"8235b28307ff2660a2209b952fd70b03df5c5d3ae9afbc6f8b22818710d07e80"}
Feb 17 14:20:40 crc kubenswrapper[4836]: I0217 14:20:40.929758 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" event={"ID":"96be2236-f07d-4944-8afa-b15a4ce0c4f0","Type":"ContainerStarted","Data":"ab5298b8b361b42235544a670b8b6569354940dd46e82c73afc20041f4e0b413"}
Feb 17 14:20:41 crc kubenswrapper[4836]: I0217 14:20:41.946633 4836 generic.go:334] "Generic (PLEG): container finished" podID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerID="8235b28307ff2660a2209b952fd70b03df5c5d3ae9afbc6f8b22818710d07e80" exitCode=0
Feb 17 14:20:41 crc kubenswrapper[4836]: I0217 14:20:41.946710 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" event={"ID":"96be2236-f07d-4944-8afa-b15a4ce0c4f0","Type":"ContainerDied","Data":"8235b28307ff2660a2209b952fd70b03df5c5d3ae9afbc6f8b22818710d07e80"}
Feb 17 14:20:44 crc kubenswrapper[4836]: I0217 14:20:44.988995 4836 generic.go:334] "Generic (PLEG): container finished" podID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerID="b63be5ecf4e404a8432eedfca2301d5ed2af1b2db76aacecbb2911a40d6711fe" exitCode=0
Feb 17 14:20:44 crc kubenswrapper[4836]: I0217 14:20:44.989058 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" event={"ID":"96be2236-f07d-4944-8afa-b15a4ce0c4f0","Type":"ContainerDied","Data":"b63be5ecf4e404a8432eedfca2301d5ed2af1b2db76aacecbb2911a40d6711fe"}
Feb 17 14:20:45 crc kubenswrapper[4836]: I0217 14:20:45.996413 4836 generic.go:334] "Generic (PLEG): container finished" podID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerID="83ef15b1d271b620ffa7952a5e4be567e8e2a86b23ca0b2ee1ba0731de7e4453" exitCode=0
Feb 17 14:20:45 crc kubenswrapper[4836]: I0217 14:20:45.996559 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" event={"ID":"96be2236-f07d-4944-8afa-b15a4ce0c4f0","Type":"ContainerDied","Data":"83ef15b1d271b620ffa7952a5e4be567e8e2a86b23ca0b2ee1ba0731de7e4453"}
Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.260640 4836 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.422353 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckvl4\" (UniqueName: \"kubernetes.io/projected/96be2236-f07d-4944-8afa-b15a4ce0c4f0-kube-api-access-ckvl4\") pod \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.422607 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-bundle\") pod \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.423348 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-bundle" (OuterVolumeSpecName: "bundle") pod "96be2236-f07d-4944-8afa-b15a4ce0c4f0" (UID: "96be2236-f07d-4944-8afa-b15a4ce0c4f0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.423401 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-util\") pod \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\" (UID: \"96be2236-f07d-4944-8afa-b15a4ce0c4f0\") " Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.423946 4836 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.433203 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96be2236-f07d-4944-8afa-b15a4ce0c4f0-kube-api-access-ckvl4" (OuterVolumeSpecName: "kube-api-access-ckvl4") pod "96be2236-f07d-4944-8afa-b15a4ce0c4f0" (UID: "96be2236-f07d-4944-8afa-b15a4ce0c4f0"). InnerVolumeSpecName "kube-api-access-ckvl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.435981 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-util" (OuterVolumeSpecName: "util") pod "96be2236-f07d-4944-8afa-b15a4ce0c4f0" (UID: "96be2236-f07d-4944-8afa-b15a4ce0c4f0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.524984 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckvl4\" (UniqueName: \"kubernetes.io/projected/96be2236-f07d-4944-8afa-b15a4ce0c4f0-kube-api-access-ckvl4\") on node \"crc\" DevicePath \"\"" Feb 17 14:20:47 crc kubenswrapper[4836]: I0217 14:20:47.525036 4836 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96be2236-f07d-4944-8afa-b15a4ce0c4f0-util\") on node \"crc\" DevicePath \"\"" Feb 17 14:20:48 crc kubenswrapper[4836]: I0217 14:20:48.014164 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" event={"ID":"96be2236-f07d-4944-8afa-b15a4ce0c4f0","Type":"ContainerDied","Data":"ab5298b8b361b42235544a670b8b6569354940dd46e82c73afc20041f4e0b413"} Feb 17 14:20:48 crc kubenswrapper[4836]: I0217 14:20:48.014255 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz" Feb 17 14:20:48 crc kubenswrapper[4836]: I0217 14:20:48.014233 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab5298b8b361b42235544a670b8b6569354940dd46e82c73afc20041f4e0b413" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.665419 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9w75g"] Feb 17 14:20:51 crc kubenswrapper[4836]: E0217 14:20:51.666095 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerName="extract" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.666111 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerName="extract" Feb 17 14:20:51 crc kubenswrapper[4836]: E0217 14:20:51.666130 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerName="util" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.666139 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerName="util" Feb 17 14:20:51 crc kubenswrapper[4836]: E0217 14:20:51.666152 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerName="pull" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.666161 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerName="pull" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.666357 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="96be2236-f07d-4944-8afa-b15a4ce0c4f0" containerName="extract" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.666955 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.669530 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.669812 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.672250 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4qq5m" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.679470 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9w75g"] Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.695880 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjqkq\" (UniqueName: \"kubernetes.io/projected/c190e38d-4893-49c9-a633-e6b912030d37-kube-api-access-vjqkq\") pod \"nmstate-operator-694c9596b7-9w75g\" (UID: \"c190e38d-4893-49c9-a633-e6b912030d37\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.796880 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjqkq\" (UniqueName: \"kubernetes.io/projected/c190e38d-4893-49c9-a633-e6b912030d37-kube-api-access-vjqkq\") pod \"nmstate-operator-694c9596b7-9w75g\" (UID: \"c190e38d-4893-49c9-a633-e6b912030d37\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.814445 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjqkq\" (UniqueName: \"kubernetes.io/projected/c190e38d-4893-49c9-a633-e6b912030d37-kube-api-access-vjqkq\") pod \"nmstate-operator-694c9596b7-9w75g\" (UID: 
\"c190e38d-4893-49c9-a633-e6b912030d37\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" Feb 17 14:20:51 crc kubenswrapper[4836]: I0217 14:20:51.986103 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" Feb 17 14:20:52 crc kubenswrapper[4836]: I0217 14:20:52.188135 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9w75g"] Feb 17 14:20:53 crc kubenswrapper[4836]: I0217 14:20:53.047868 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" event={"ID":"c190e38d-4893-49c9-a633-e6b912030d37","Type":"ContainerStarted","Data":"67a57c72c9e94ba262c5b325a5d69a76019472bfac6c3846c7ace76d4e46915a"} Feb 17 14:20:55 crc kubenswrapper[4836]: I0217 14:20:55.062061 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" event={"ID":"c190e38d-4893-49c9-a633-e6b912030d37","Type":"ContainerStarted","Data":"1495efa1ef582fd1a8b215e602903b75c391e0d227af75075e32e473efba5e9b"} Feb 17 14:20:55 crc kubenswrapper[4836]: I0217 14:20:55.081480 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-9w75g" podStartSLOduration=1.804008023 podStartE2EDuration="4.081425664s" podCreationTimestamp="2026-02-17 14:20:51 +0000 UTC" firstStartedPulling="2026-02-17 14:20:52.203783008 +0000 UTC m=+878.546711267" lastFinishedPulling="2026-02-17 14:20:54.481200639 +0000 UTC m=+880.824128908" observedRunningTime="2026-02-17 14:20:55.07803904 +0000 UTC m=+881.420967319" watchObservedRunningTime="2026-02-17 14:20:55.081425664 +0000 UTC m=+881.424353953" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.193181 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-877xf"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 
14:21:01.195357 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.204526 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2n6qm" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.219456 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.220766 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.229740 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-877xf"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.233640 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.283491 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.301635 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-w8wbg"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.302412 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.334682 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-52vj8\" (UID: \"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.334999 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vsn\" (UniqueName: \"kubernetes.io/projected/0d0615b5-ef3b-4932-957c-a4b44f35c1a9-kube-api-access-84vsn\") pod \"nmstate-metrics-58c85c668d-877xf\" (UID: \"0d0615b5-ef3b-4932-957c-a4b44f35c1a9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.335184 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grx5q\" (UniqueName: \"kubernetes.io/projected/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-kube-api-access-grx5q\") pod \"nmstate-webhook-866bcb46dc-52vj8\" (UID: \"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.438500 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-ovs-socket\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.438568 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz49c\" (UniqueName: 
\"kubernetes.io/projected/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-kube-api-access-gz49c\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.438638 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grx5q\" (UniqueName: \"kubernetes.io/projected/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-kube-api-access-grx5q\") pod \"nmstate-webhook-866bcb46dc-52vj8\" (UID: \"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.438678 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-nmstate-lock\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.438718 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-52vj8\" (UID: \"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.438755 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-dbus-socket\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.438796 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84vsn\" 
(UniqueName: \"kubernetes.io/projected/0d0615b5-ef3b-4932-957c-a4b44f35c1a9-kube-api-access-84vsn\") pod \"nmstate-metrics-58c85c668d-877xf\" (UID: \"0d0615b5-ef3b-4932-957c-a4b44f35c1a9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" Feb 17 14:21:01 crc kubenswrapper[4836]: E0217 14:21:01.439397 4836 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 14:21:01 crc kubenswrapper[4836]: E0217 14:21:01.439475 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-tls-key-pair podName:6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8 nodeName:}" failed. No retries permitted until 2026-02-17 14:21:01.93945596 +0000 UTC m=+888.282384229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-tls-key-pair") pod "nmstate-webhook-866bcb46dc-52vj8" (UID: "6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8") : secret "openshift-nmstate-webhook" not found Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.486930 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84vsn\" (UniqueName: \"kubernetes.io/projected/0d0615b5-ef3b-4932-957c-a4b44f35c1a9-kube-api-access-84vsn\") pod \"nmstate-metrics-58c85c668d-877xf\" (UID: \"0d0615b5-ef3b-4932-957c-a4b44f35c1a9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.489493 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grx5q\" (UniqueName: \"kubernetes.io/projected/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-kube-api-access-grx5q\") pod \"nmstate-webhook-866bcb46dc-52vj8\" (UID: \"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.522429 4836 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.537281 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.540091 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz49c\" (UniqueName: \"kubernetes.io/projected/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-kube-api-access-gz49c\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.540280 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-nmstate-lock\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.540433 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-dbus-socket\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.540537 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-ovs-socket\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.540853 4836 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-nmstate"/"nginx-conf" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.541448 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-nmstate-lock\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.541607 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-dbus-socket\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.541625 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-ovs-socket\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.540946 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.540997 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-j25cv" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.552227 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.552828 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.573362 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz49c\" (UniqueName: \"kubernetes.io/projected/9ff842c9-08b8-4363-b82a-5f7e2461ec2a-kube-api-access-gz49c\") pod \"nmstate-handler-w8wbg\" (UID: \"9ff842c9-08b8-4363-b82a-5f7e2461ec2a\") " pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.636701 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.644331 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2kzb\" (UniqueName: \"kubernetes.io/projected/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-kube-api-access-w2kzb\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.644405 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.644422 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: W0217 14:21:01.683050 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff842c9_08b8_4363_b82a_5f7e2461ec2a.slice/crio-e28585bf6f19fb3108b832e84d25055b3a9b9067d38607b8e5943b281573e17e WatchSource:0}: Error finding container e28585bf6f19fb3108b832e84d25055b3a9b9067d38607b8e5943b281573e17e: Status 404 returned error can't find the container with id e28585bf6f19fb3108b832e84d25055b3a9b9067d38607b8e5943b281573e17e Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.745389 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2kzb\" (UniqueName: \"kubernetes.io/projected/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-kube-api-access-w2kzb\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.745455 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.745479 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.751101 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b844687d4-4gf5j"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.751964 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.755721 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.786383 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.798895 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2kzb\" (UniqueName: \"kubernetes.io/projected/8fc6d41c-a8a1-4fe3-ade2-b79761920b17-kube-api-access-w2kzb\") pod \"nmstate-console-plugin-5c78fc5d65-q985f\" (UID: \"8fc6d41c-a8a1-4fe3-ade2-b79761920b17\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.823114 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b844687d4-4gf5j"] Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.848363 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-config\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.848443 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-oauth-serving-cert\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.848473 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-oauth-config\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.848507 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-trusted-ca-bundle\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.848535 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frxf\" (UniqueName: \"kubernetes.io/projected/a994c152-32cc-448d-a7f7-099bd60fb8d9-kube-api-access-8frxf\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.848574 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-serving-cert\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.848606 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-service-ca\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.853882 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.920854 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-877xf"] Feb 17 14:21:01 crc kubenswrapper[4836]: W0217 14:21:01.922639 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d0615b5_ef3b_4932_957c_a4b44f35c1a9.slice/crio-8f217c70f44750112f961da5e87d3955d3c432341620aa0dc51f2e75d18c16e3 WatchSource:0}: Error finding container 8f217c70f44750112f961da5e87d3955d3c432341620aa0dc51f2e75d18c16e3: Status 404 returned error can't find the container with id 8f217c70f44750112f961da5e87d3955d3c432341620aa0dc51f2e75d18c16e3 Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950274 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-trusted-ca-bundle\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " 
pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950366 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frxf\" (UniqueName: \"kubernetes.io/projected/a994c152-32cc-448d-a7f7-099bd60fb8d9-kube-api-access-8frxf\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950409 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-serving-cert\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950444 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-service-ca\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950484 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-52vj8\" (UID: \"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950546 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-config\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " 
pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950594 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-oauth-serving-cert\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.950618 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-oauth-config\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.951830 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-trusted-ca-bundle\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.951981 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-service-ca\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.952604 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-config\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc 
kubenswrapper[4836]: I0217 14:21:01.953226 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a994c152-32cc-448d-a7f7-099bd60fb8d9-oauth-serving-cert\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.955597 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-oauth-config\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.956460 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a994c152-32cc-448d-a7f7-099bd60fb8d9-console-serving-cert\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.957630 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-52vj8\" (UID: \"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:01 crc kubenswrapper[4836]: I0217 14:21:01.971413 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frxf\" (UniqueName: \"kubernetes.io/projected/a994c152-32cc-448d-a7f7-099bd60fb8d9-kube-api-access-8frxf\") pod \"console-b844687d4-4gf5j\" (UID: \"a994c152-32cc-448d-a7f7-099bd60fb8d9\") " pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.085679 4836 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f"] Feb 17 14:21:02 crc kubenswrapper[4836]: W0217 14:21:02.089820 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc6d41c_a8a1_4fe3_ade2_b79761920b17.slice/crio-516c8bbfd65d5225a7defe5ace76faaddad1d69a7c69913806b98a6dde228d0e WatchSource:0}: Error finding container 516c8bbfd65d5225a7defe5ace76faaddad1d69a7c69913806b98a6dde228d0e: Status 404 returned error can't find the container with id 516c8bbfd65d5225a7defe5ace76faaddad1d69a7c69913806b98a6dde228d0e Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.106339 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" event={"ID":"0d0615b5-ef3b-4932-957c-a4b44f35c1a9","Type":"ContainerStarted","Data":"8f217c70f44750112f961da5e87d3955d3c432341620aa0dc51f2e75d18c16e3"} Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.107355 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" event={"ID":"8fc6d41c-a8a1-4fe3-ade2-b79761920b17","Type":"ContainerStarted","Data":"516c8bbfd65d5225a7defe5ace76faaddad1d69a7c69913806b98a6dde228d0e"} Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.108051 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-w8wbg" event={"ID":"9ff842c9-08b8-4363-b82a-5f7e2461ec2a","Type":"ContainerStarted","Data":"e28585bf6f19fb3108b832e84d25055b3a9b9067d38607b8e5943b281573e17e"} Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.109577 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.168409 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.329681 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b844687d4-4gf5j"] Feb 17 14:21:02 crc kubenswrapper[4836]: W0217 14:21:02.339607 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda994c152_32cc_448d_a7f7_099bd60fb8d9.slice/crio-4a2749211e3b0287b4eb0c905a1801e479c9610291f3ba45bfe6eeb8d5212844 WatchSource:0}: Error finding container 4a2749211e3b0287b4eb0c905a1801e479c9610291f3ba45bfe6eeb8d5212844: Status 404 returned error can't find the container with id 4a2749211e3b0287b4eb0c905a1801e479c9610291f3ba45bfe6eeb8d5212844 Feb 17 14:21:02 crc kubenswrapper[4836]: I0217 14:21:02.418463 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8"] Feb 17 14:21:02 crc kubenswrapper[4836]: W0217 14:21:02.439698 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d6a6ca4_12c5_4bc1_b67e_5a48d1fe86f8.slice/crio-433cba2ed4ba704cd9599b2ca4047f781885b90b3de1ec31939f76d4b7d65f11 WatchSource:0}: Error finding container 433cba2ed4ba704cd9599b2ca4047f781885b90b3de1ec31939f76d4b7d65f11: Status 404 returned error can't find the container with id 433cba2ed4ba704cd9599b2ca4047f781885b90b3de1ec31939f76d4b7d65f11 Feb 17 14:21:03 crc kubenswrapper[4836]: I0217 14:21:03.116823 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" event={"ID":"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8","Type":"ContainerStarted","Data":"433cba2ed4ba704cd9599b2ca4047f781885b90b3de1ec31939f76d4b7d65f11"} Feb 17 14:21:03 crc kubenswrapper[4836]: I0217 14:21:03.118796 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b844687d4-4gf5j" 
event={"ID":"a994c152-32cc-448d-a7f7-099bd60fb8d9","Type":"ContainerStarted","Data":"729b5bfd9fe518d7af30813213189948586fc2a39921928919b8098327fedc0c"} Feb 17 14:21:03 crc kubenswrapper[4836]: I0217 14:21:03.118827 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b844687d4-4gf5j" event={"ID":"a994c152-32cc-448d-a7f7-099bd60fb8d9","Type":"ContainerStarted","Data":"4a2749211e3b0287b4eb0c905a1801e479c9610291f3ba45bfe6eeb8d5212844"} Feb 17 14:21:03 crc kubenswrapper[4836]: I0217 14:21:03.137086 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b844687d4-4gf5j" podStartSLOduration=2.137065423 podStartE2EDuration="2.137065423s" podCreationTimestamp="2026-02-17 14:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:21:03.13583998 +0000 UTC m=+889.478768249" watchObservedRunningTime="2026-02-17 14:21:03.137065423 +0000 UTC m=+889.479993692" Feb 17 14:21:05 crc kubenswrapper[4836]: I0217 14:21:05.155088 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-w8wbg" event={"ID":"9ff842c9-08b8-4363-b82a-5f7e2461ec2a","Type":"ContainerStarted","Data":"e91294bcf50ad5ea50a8a24d08c1f117b383b99f73cfa3dcaaee8cb047cd56b3"} Feb 17 14:21:05 crc kubenswrapper[4836]: I0217 14:21:05.155701 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:05 crc kubenswrapper[4836]: I0217 14:21:05.157250 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" event={"ID":"6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8","Type":"ContainerStarted","Data":"4eacf308cad48168c73e5827af9a5fa4a128251d6957a66cdff72a3a21be9592"} Feb 17 14:21:05 crc kubenswrapper[4836]: I0217 14:21:05.157392 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:05 crc kubenswrapper[4836]: I0217 14:21:05.160307 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" event={"ID":"0d0615b5-ef3b-4932-957c-a4b44f35c1a9","Type":"ContainerStarted","Data":"f778fb88b8d7a66cb9e757f2a190b1f5ae397e2a3a2ef084d646a6696e5f99ae"} Feb 17 14:21:05 crc kubenswrapper[4836]: I0217 14:21:05.185122 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" podStartSLOduration=2.028844558 podStartE2EDuration="4.185100335s" podCreationTimestamp="2026-02-17 14:21:01 +0000 UTC" firstStartedPulling="2026-02-17 14:21:02.442482138 +0000 UTC m=+888.785410407" lastFinishedPulling="2026-02-17 14:21:04.598737915 +0000 UTC m=+890.941666184" observedRunningTime="2026-02-17 14:21:05.184350705 +0000 UTC m=+891.527278984" watchObservedRunningTime="2026-02-17 14:21:05.185100335 +0000 UTC m=+891.528028604" Feb 17 14:21:05 crc kubenswrapper[4836]: I0217 14:21:05.190187 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-w8wbg" podStartSLOduration=1.285013067 podStartE2EDuration="4.190162886s" podCreationTimestamp="2026-02-17 14:21:01 +0000 UTC" firstStartedPulling="2026-02-17 14:21:01.695062207 +0000 UTC m=+888.037990476" lastFinishedPulling="2026-02-17 14:21:04.600212026 +0000 UTC m=+890.943140295" observedRunningTime="2026-02-17 14:21:05.169695578 +0000 UTC m=+891.512623847" watchObservedRunningTime="2026-02-17 14:21:05.190162886 +0000 UTC m=+891.533091155" Feb 17 14:21:08 crc kubenswrapper[4836]: I0217 14:21:08.186447 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" event={"ID":"0d0615b5-ef3b-4932-957c-a4b44f35c1a9","Type":"ContainerStarted","Data":"9aadb1da88a882b1e411d7b4e93a538f345fea0b1d3d9c1af8adb50d6fff8506"} Feb 17 14:21:08 crc 
kubenswrapper[4836]: I0217 14:21:08.188865 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" event={"ID":"8fc6d41c-a8a1-4fe3-ade2-b79761920b17","Type":"ContainerStarted","Data":"50922edbb640e19c9d8a35cfe5d477f250d234d0781f4ab9c50277718f237ba4"} Feb 17 14:21:08 crc kubenswrapper[4836]: I0217 14:21:08.207857 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-877xf" podStartSLOduration=1.761940369 podStartE2EDuration="7.20783998s" podCreationTimestamp="2026-02-17 14:21:01 +0000 UTC" firstStartedPulling="2026-02-17 14:21:01.925137215 +0000 UTC m=+888.268065484" lastFinishedPulling="2026-02-17 14:21:07.371036826 +0000 UTC m=+893.713965095" observedRunningTime="2026-02-17 14:21:08.204524678 +0000 UTC m=+894.547452977" watchObservedRunningTime="2026-02-17 14:21:08.20783998 +0000 UTC m=+894.550768249" Feb 17 14:21:08 crc kubenswrapper[4836]: I0217 14:21:08.237797 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-q985f" podStartSLOduration=1.970026135 podStartE2EDuration="7.23777458s" podCreationTimestamp="2026-02-17 14:21:01 +0000 UTC" firstStartedPulling="2026-02-17 14:21:02.091970837 +0000 UTC m=+888.434899106" lastFinishedPulling="2026-02-17 14:21:07.359719282 +0000 UTC m=+893.702647551" observedRunningTime="2026-02-17 14:21:08.231598889 +0000 UTC m=+894.574527168" watchObservedRunningTime="2026-02-17 14:21:08.23777458 +0000 UTC m=+894.580702849" Feb 17 14:21:11 crc kubenswrapper[4836]: I0217 14:21:11.663108 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-w8wbg" Feb 17 14:21:12 crc kubenswrapper[4836]: I0217 14:21:12.110550 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:12 crc kubenswrapper[4836]: 
I0217 14:21:12.110872 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:12 crc kubenswrapper[4836]: I0217 14:21:12.118636 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:12 crc kubenswrapper[4836]: I0217 14:21:12.218115 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b844687d4-4gf5j" Feb 17 14:21:12 crc kubenswrapper[4836]: I0217 14:21:12.274591 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6zspj"] Feb 17 14:21:22 crc kubenswrapper[4836]: I0217 14:21:22.174641 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-52vj8" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.113589 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl"] Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.115618 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.124250 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.124588 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl"] Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.197513 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.197602 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.197662 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hck2p\" (UniqueName: \"kubernetes.io/projected/5939eb42-42be-4ecf-845a-c28b4669c02d-kube-api-access-hck2p\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: 
I0217 14:21:36.298988 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.299121 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hck2p\" (UniqueName: \"kubernetes.io/projected/5939eb42-42be-4ecf-845a-c28b4669c02d-kube-api-access-hck2p\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.299205 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.299823 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.300145 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.324893 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hck2p\" (UniqueName: \"kubernetes.io/projected/5939eb42-42be-4ecf-845a-c28b4669c02d-kube-api-access-hck2p\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.438927 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:36 crc kubenswrapper[4836]: I0217 14:21:36.863829 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl"] Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.317779 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6zspj" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" containerID="cri-o://f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98" gracePeriod=15 Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.398764 4836 generic.go:334] "Generic (PLEG): container finished" podID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerID="36109a71edda8ae9aa419b8559cf5fe2431d0d712a414f525d482f59972b80ca" exitCode=0 Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.398831 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" event={"ID":"5939eb42-42be-4ecf-845a-c28b4669c02d","Type":"ContainerDied","Data":"36109a71edda8ae9aa419b8559cf5fe2431d0d712a414f525d482f59972b80ca"} Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.398860 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" event={"ID":"5939eb42-42be-4ecf-845a-c28b4669c02d","Type":"ContainerStarted","Data":"bfa7ffee62c7db8c9b964e7f91cb1a47a56e8fd7b3d25a5ed2ab5b4a481604e2"} Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.719465 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6zspj_6d52104b-91e7-4a3a-9138-163eb850485d/console/0.log" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.719792 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.824378 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-trusted-ca-bundle\") pod \"6d52104b-91e7-4a3a-9138-163eb850485d\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.824483 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-oauth-config\") pod \"6d52104b-91e7-4a3a-9138-163eb850485d\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.824521 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-console-config\") 
pod \"6d52104b-91e7-4a3a-9138-163eb850485d\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.824595 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-oauth-serving-cert\") pod \"6d52104b-91e7-4a3a-9138-163eb850485d\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.824618 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-service-ca\") pod \"6d52104b-91e7-4a3a-9138-163eb850485d\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.824651 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-serving-cert\") pod \"6d52104b-91e7-4a3a-9138-163eb850485d\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.824719 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grvpk\" (UniqueName: \"kubernetes.io/projected/6d52104b-91e7-4a3a-9138-163eb850485d-kube-api-access-grvpk\") pod \"6d52104b-91e7-4a3a-9138-163eb850485d\" (UID: \"6d52104b-91e7-4a3a-9138-163eb850485d\") " Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.825407 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6d52104b-91e7-4a3a-9138-163eb850485d" (UID: "6d52104b-91e7-4a3a-9138-163eb850485d"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.825668 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6d52104b-91e7-4a3a-9138-163eb850485d" (UID: "6d52104b-91e7-4a3a-9138-163eb850485d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.825798 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-service-ca" (OuterVolumeSpecName: "service-ca") pod "6d52104b-91e7-4a3a-9138-163eb850485d" (UID: "6d52104b-91e7-4a3a-9138-163eb850485d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.826227 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-console-config" (OuterVolumeSpecName: "console-config") pod "6d52104b-91e7-4a3a-9138-163eb850485d" (UID: "6d52104b-91e7-4a3a-9138-163eb850485d"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.826259 4836 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.826318 4836 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.826331 4836 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.832014 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6d52104b-91e7-4a3a-9138-163eb850485d" (UID: "6d52104b-91e7-4a3a-9138-163eb850485d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.832540 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d52104b-91e7-4a3a-9138-163eb850485d-kube-api-access-grvpk" (OuterVolumeSpecName: "kube-api-access-grvpk") pod "6d52104b-91e7-4a3a-9138-163eb850485d" (UID: "6d52104b-91e7-4a3a-9138-163eb850485d"). InnerVolumeSpecName "kube-api-access-grvpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.835213 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6d52104b-91e7-4a3a-9138-163eb850485d" (UID: "6d52104b-91e7-4a3a-9138-163eb850485d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.927849 4836 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.927945 4836 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d52104b-91e7-4a3a-9138-163eb850485d-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.927960 4836 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d52104b-91e7-4a3a-9138-163eb850485d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:37 crc kubenswrapper[4836]: I0217 14:21:37.927972 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grvpk\" (UniqueName: \"kubernetes.io/projected/6d52104b-91e7-4a3a-9138-163eb850485d-kube-api-access-grvpk\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.452936 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6zspj_6d52104b-91e7-4a3a-9138-163eb850485d/console/0.log" Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.452991 4836 generic.go:334] "Generic (PLEG): container finished" 
podID="6d52104b-91e7-4a3a-9138-163eb850485d" containerID="f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98" exitCode=2 Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.453025 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6zspj" event={"ID":"6d52104b-91e7-4a3a-9138-163eb850485d","Type":"ContainerDied","Data":"f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98"} Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.453054 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6zspj" event={"ID":"6d52104b-91e7-4a3a-9138-163eb850485d","Type":"ContainerDied","Data":"291ff510753e6307affd77e72c2b113e622f07b799c9441e606ef5eb3889b1a8"} Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.453072 4836 scope.go:117] "RemoveContainer" containerID="f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98" Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.453222 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6zspj" Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.503387 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6zspj"] Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.508532 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6zspj"] Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.585141 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" path="/var/lib/kubelet/pods/6d52104b-91e7-4a3a-9138-163eb850485d/volumes" Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.622683 4836 scope.go:117] "RemoveContainer" containerID="f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98" Feb 17 14:21:38 crc kubenswrapper[4836]: E0217 14:21:38.623437 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98\": container with ID starting with f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98 not found: ID does not exist" containerID="f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98" Feb 17 14:21:38 crc kubenswrapper[4836]: I0217 14:21:38.623474 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98"} err="failed to get container status \"f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98\": rpc error: code = NotFound desc = could not find container \"f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98\": container with ID starting with f775333908715051cce1f22baadc49c1832333ebb3ff40de54bd938fa8c37a98 not found: ID does not exist" Feb 17 14:21:38 crc kubenswrapper[4836]: E0217 14:21:38.938330 4836 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5939eb42_42be_4ecf_845a_c28b4669c02d.slice/crio-6ad7243e7b72de4f694d75068780dc31771b20f94dd9e2ee564008d8fbfeb3ca.scope\": RecentStats: unable to find data in memory cache]" Feb 17 14:21:39 crc kubenswrapper[4836]: I0217 14:21:39.464755 4836 generic.go:334] "Generic (PLEG): container finished" podID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerID="6ad7243e7b72de4f694d75068780dc31771b20f94dd9e2ee564008d8fbfeb3ca" exitCode=0 Feb 17 14:21:39 crc kubenswrapper[4836]: I0217 14:21:39.464811 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" event={"ID":"5939eb42-42be-4ecf-845a-c28b4669c02d","Type":"ContainerDied","Data":"6ad7243e7b72de4f694d75068780dc31771b20f94dd9e2ee564008d8fbfeb3ca"} Feb 17 14:21:40 crc kubenswrapper[4836]: I0217 14:21:40.474027 4836 generic.go:334] "Generic (PLEG): container finished" podID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerID="8027455750d65578381ecdd1bb12d3bb3c1d46d569bb8e1b1c71989150c32938" exitCode=0 Feb 17 14:21:40 crc kubenswrapper[4836]: I0217 14:21:40.474152 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" event={"ID":"5939eb42-42be-4ecf-845a-c28b4669c02d","Type":"ContainerDied","Data":"8027455750d65578381ecdd1bb12d3bb3c1d46d569bb8e1b1c71989150c32938"} Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.742334 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.783988 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-util\") pod \"5939eb42-42be-4ecf-845a-c28b4669c02d\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.784229 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-bundle\") pod \"5939eb42-42be-4ecf-845a-c28b4669c02d\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.784269 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hck2p\" (UniqueName: \"kubernetes.io/projected/5939eb42-42be-4ecf-845a-c28b4669c02d-kube-api-access-hck2p\") pod \"5939eb42-42be-4ecf-845a-c28b4669c02d\" (UID: \"5939eb42-42be-4ecf-845a-c28b4669c02d\") " Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.785343 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-bundle" (OuterVolumeSpecName: "bundle") pod "5939eb42-42be-4ecf-845a-c28b4669c02d" (UID: "5939eb42-42be-4ecf-845a-c28b4669c02d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.792013 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5939eb42-42be-4ecf-845a-c28b4669c02d-kube-api-access-hck2p" (OuterVolumeSpecName: "kube-api-access-hck2p") pod "5939eb42-42be-4ecf-845a-c28b4669c02d" (UID: "5939eb42-42be-4ecf-845a-c28b4669c02d"). InnerVolumeSpecName "kube-api-access-hck2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.800812 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-util" (OuterVolumeSpecName: "util") pod "5939eb42-42be-4ecf-845a-c28b4669c02d" (UID: "5939eb42-42be-4ecf-845a-c28b4669c02d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.885825 4836 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.885885 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hck2p\" (UniqueName: \"kubernetes.io/projected/5939eb42-42be-4ecf-845a-c28b4669c02d-kube-api-access-hck2p\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:41 crc kubenswrapper[4836]: I0217 14:21:41.885906 4836 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5939eb42-42be-4ecf-845a-c28b4669c02d-util\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:42 crc kubenswrapper[4836]: I0217 14:21:42.491962 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" event={"ID":"5939eb42-42be-4ecf-845a-c28b4669c02d","Type":"ContainerDied","Data":"bfa7ffee62c7db8c9b964e7f91cb1a47a56e8fd7b3d25a5ed2ab5b4a481604e2"} Feb 17 14:21:42 crc kubenswrapper[4836]: I0217 14:21:42.491998 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl" Feb 17 14:21:42 crc kubenswrapper[4836]: I0217 14:21:42.492001 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa7ffee62c7db8c9b964e7f91cb1a47a56e8fd7b3d25a5ed2ab5b4a481604e2" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.962220 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9lh8"] Feb 17 14:21:43 crc kubenswrapper[4836]: E0217 14:21:43.962530 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerName="pull" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.962545 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerName="pull" Feb 17 14:21:43 crc kubenswrapper[4836]: E0217 14:21:43.962560 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerName="util" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.962565 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerName="util" Feb 17 14:21:43 crc kubenswrapper[4836]: E0217 14:21:43.962578 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.962584 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" Feb 17 14:21:43 crc kubenswrapper[4836]: E0217 14:21:43.962596 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerName="extract" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.962602 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5939eb42-42be-4ecf-845a-c28b4669c02d" 
containerName="extract" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.962697 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d52104b-91e7-4a3a-9138-163eb850485d" containerName="console" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.962716 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="5939eb42-42be-4ecf-845a-c28b4669c02d" containerName="extract" Feb 17 14:21:43 crc kubenswrapper[4836]: I0217 14:21:43.963542 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.034054 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9lh8"] Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.146553 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-catalog-content\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.148511 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-utilities\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.148574 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76l57\" (UniqueName: \"kubernetes.io/projected/b11bad93-5af0-4c75-954c-42cc99684597-kube-api-access-76l57\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " 
pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.250460 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-catalog-content\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.250587 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-utilities\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.250648 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76l57\" (UniqueName: \"kubernetes.io/projected/b11bad93-5af0-4c75-954c-42cc99684597-kube-api-access-76l57\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.251842 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-catalog-content\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.252523 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-utilities\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " 
pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.272416 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76l57\" (UniqueName: \"kubernetes.io/projected/b11bad93-5af0-4c75-954c-42cc99684597-kube-api-access-76l57\") pod \"certified-operators-p9lh8\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.342707 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:44 crc kubenswrapper[4836]: I0217 14:21:44.860343 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9lh8"] Feb 17 14:21:45 crc kubenswrapper[4836]: I0217 14:21:45.517226 4836 generic.go:334] "Generic (PLEG): container finished" podID="b11bad93-5af0-4c75-954c-42cc99684597" containerID="240ced32e0bb68659d8fa3215c9fcef735236350bf5d87b16d4adcec08100306" exitCode=0 Feb 17 14:21:45 crc kubenswrapper[4836]: I0217 14:21:45.517328 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9lh8" event={"ID":"b11bad93-5af0-4c75-954c-42cc99684597","Type":"ContainerDied","Data":"240ced32e0bb68659d8fa3215c9fcef735236350bf5d87b16d4adcec08100306"} Feb 17 14:21:45 crc kubenswrapper[4836]: I0217 14:21:45.517420 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9lh8" event={"ID":"b11bad93-5af0-4c75-954c-42cc99684597","Type":"ContainerStarted","Data":"f9b851496d5cce0303b165e18e03386edf9e343272b9928f99078559a7e8d5a0"} Feb 17 14:21:46 crc kubenswrapper[4836]: I0217 14:21:46.525400 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9lh8" 
event={"ID":"b11bad93-5af0-4c75-954c-42cc99684597","Type":"ContainerStarted","Data":"4fe1fe066d30b34ae39334b2a5ab55fea3ebf731c92645405e6f3bbb74be985c"} Feb 17 14:21:47 crc kubenswrapper[4836]: I0217 14:21:47.534807 4836 generic.go:334] "Generic (PLEG): container finished" podID="b11bad93-5af0-4c75-954c-42cc99684597" containerID="4fe1fe066d30b34ae39334b2a5ab55fea3ebf731c92645405e6f3bbb74be985c" exitCode=0 Feb 17 14:21:47 crc kubenswrapper[4836]: I0217 14:21:47.534895 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9lh8" event={"ID":"b11bad93-5af0-4c75-954c-42cc99684597","Type":"ContainerDied","Data":"4fe1fe066d30b34ae39334b2a5ab55fea3ebf731c92645405e6f3bbb74be985c"} Feb 17 14:21:48 crc kubenswrapper[4836]: I0217 14:21:48.543858 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9lh8" event={"ID":"b11bad93-5af0-4c75-954c-42cc99684597","Type":"ContainerStarted","Data":"23e3b31d1bf10a2dce07355e7766faafe2d1bf59f1230f2ba36f46b169423e26"} Feb 17 14:21:48 crc kubenswrapper[4836]: I0217 14:21:48.564847 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9lh8" podStartSLOduration=3.138232373 podStartE2EDuration="5.564816245s" podCreationTimestamp="2026-02-17 14:21:43 +0000 UTC" firstStartedPulling="2026-02-17 14:21:45.519438341 +0000 UTC m=+931.862366610" lastFinishedPulling="2026-02-17 14:21:47.946022213 +0000 UTC m=+934.288950482" observedRunningTime="2026-02-17 14:21:48.561982748 +0000 UTC m=+934.904911037" watchObservedRunningTime="2026-02-17 14:21:48.564816245 +0000 UTC m=+934.907744534" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.425039 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt"] Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.426361 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.428417 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.428478 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.428685 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-vgpnt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.428885 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.429283 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.448428 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt"] Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.565798 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-apiservice-cert\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.565866 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8dh\" (UniqueName: \"kubernetes.io/projected/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-kube-api-access-kw8dh\") pod 
\"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.565894 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-webhook-cert\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.666777 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-apiservice-cert\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.666859 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8dh\" (UniqueName: \"kubernetes.io/projected/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-kube-api-access-kw8dh\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.666884 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-webhook-cert\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc 
kubenswrapper[4836]: I0217 14:21:53.675883 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-webhook-cert\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.697801 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-apiservice-cert\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.699213 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8dh\" (UniqueName: \"kubernetes.io/projected/ccb35f40-d0b8-4a1e-8c45-63dd6987b72c-kube-api-access-kw8dh\") pod \"metallb-operator-controller-manager-69b9cbf5df-6fkqt\" (UID: \"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c\") " pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.742482 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.938769 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx"] Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.940899 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.944309 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7nbsm" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.946620 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.948323 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.961958 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx"] Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.975799 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gx9\" (UniqueName: \"kubernetes.io/projected/16c736d5-389e-4d03-9657-1abcd4448953-kube-api-access-w6gx9\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.975864 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16c736d5-389e-4d03-9657-1abcd4448953-apiservice-cert\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:53 crc kubenswrapper[4836]: I0217 14:21:53.975942 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/16c736d5-389e-4d03-9657-1abcd4448953-webhook-cert\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.079057 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16c736d5-389e-4d03-9657-1abcd4448953-webhook-cert\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.079149 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gx9\" (UniqueName: \"kubernetes.io/projected/16c736d5-389e-4d03-9657-1abcd4448953-kube-api-access-w6gx9\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.079182 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16c736d5-389e-4d03-9657-1abcd4448953-apiservice-cert\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.085009 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16c736d5-389e-4d03-9657-1abcd4448953-webhook-cert\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc 
kubenswrapper[4836]: I0217 14:21:54.086043 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16c736d5-389e-4d03-9657-1abcd4448953-apiservice-cert\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.123601 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6gx9\" (UniqueName: \"kubernetes.io/projected/16c736d5-389e-4d03-9657-1abcd4448953-kube-api-access-w6gx9\") pod \"metallb-operator-webhook-server-856546fc87-n5vrx\" (UID: \"16c736d5-389e-4d03-9657-1abcd4448953\") " pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.270738 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.282813 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt"] Feb 17 14:21:54 crc kubenswrapper[4836]: W0217 14:21:54.294958 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccb35f40_d0b8_4a1e_8c45_63dd6987b72c.slice/crio-fec3dfd83907dceea71791e3f39ff9f7358e04f0d059a73af848f044d7987788 WatchSource:0}: Error finding container fec3dfd83907dceea71791e3f39ff9f7358e04f0d059a73af848f044d7987788: Status 404 returned error can't find the container with id fec3dfd83907dceea71791e3f39ff9f7358e04f0d059a73af848f044d7987788 Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.343238 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:54 crc 
kubenswrapper[4836]: I0217 14:21:54.343811 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.449054 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.533975 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx"] Feb 17 14:21:54 crc kubenswrapper[4836]: W0217 14:21:54.539839 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c736d5_389e_4d03_9657_1abcd4448953.slice/crio-c936d1fb7a39e85562851c2f13505b9eddab11d6528d5d2a322e4ba467ba7694 WatchSource:0}: Error finding container c936d1fb7a39e85562851c2f13505b9eddab11d6528d5d2a322e4ba467ba7694: Status 404 returned error can't find the container with id c936d1fb7a39e85562851c2f13505b9eddab11d6528d5d2a322e4ba467ba7694 Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.587594 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" event={"ID":"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c","Type":"ContainerStarted","Data":"fec3dfd83907dceea71791e3f39ff9f7358e04f0d059a73af848f044d7987788"} Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.588730 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" event={"ID":"16c736d5-389e-4d03-9657-1abcd4448953","Type":"ContainerStarted","Data":"c936d1fb7a39e85562851c2f13505b9eddab11d6528d5d2a322e4ba467ba7694"} Feb 17 14:21:54 crc kubenswrapper[4836]: I0217 14:21:54.630422 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:21:56 crc 
kubenswrapper[4836]: I0217 14:21:56.755982 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9lh8"] Feb 17 14:21:57 crc kubenswrapper[4836]: I0217 14:21:57.611927 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p9lh8" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="registry-server" containerID="cri-o://23e3b31d1bf10a2dce07355e7766faafe2d1bf59f1230f2ba36f46b169423e26" gracePeriod=2 Feb 17 14:21:58 crc kubenswrapper[4836]: I0217 14:21:58.629338 4836 generic.go:334] "Generic (PLEG): container finished" podID="b11bad93-5af0-4c75-954c-42cc99684597" containerID="23e3b31d1bf10a2dce07355e7766faafe2d1bf59f1230f2ba36f46b169423e26" exitCode=0 Feb 17 14:21:58 crc kubenswrapper[4836]: I0217 14:21:58.629398 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9lh8" event={"ID":"b11bad93-5af0-4c75-954c-42cc99684597","Type":"ContainerDied","Data":"23e3b31d1bf10a2dce07355e7766faafe2d1bf59f1230f2ba36f46b169423e26"} Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.127613 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.308891 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-catalog-content\") pod \"b11bad93-5af0-4c75-954c-42cc99684597\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.308975 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76l57\" (UniqueName: \"kubernetes.io/projected/b11bad93-5af0-4c75-954c-42cc99684597-kube-api-access-76l57\") pod \"b11bad93-5af0-4c75-954c-42cc99684597\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.311035 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-utilities\") pod \"b11bad93-5af0-4c75-954c-42cc99684597\" (UID: \"b11bad93-5af0-4c75-954c-42cc99684597\") " Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.312912 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-utilities" (OuterVolumeSpecName: "utilities") pod "b11bad93-5af0-4c75-954c-42cc99684597" (UID: "b11bad93-5af0-4c75-954c-42cc99684597"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.318258 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11bad93-5af0-4c75-954c-42cc99684597-kube-api-access-76l57" (OuterVolumeSpecName: "kube-api-access-76l57") pod "b11bad93-5af0-4c75-954c-42cc99684597" (UID: "b11bad93-5af0-4c75-954c-42cc99684597"). InnerVolumeSpecName "kube-api-access-76l57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.369243 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11bad93-5af0-4c75-954c-42cc99684597" (UID: "b11bad93-5af0-4c75-954c-42cc99684597"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.412959 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.413228 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11bad93-5af0-4c75-954c-42cc99684597-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.413731 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76l57\" (UniqueName: \"kubernetes.io/projected/b11bad93-5af0-4c75-954c-42cc99684597-kube-api-access-76l57\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.642565 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" event={"ID":"ccb35f40-d0b8-4a1e-8c45-63dd6987b72c","Type":"ContainerStarted","Data":"a07deea8e75a533163f5f5f6a1fc785de1ad00b58a99eccf4b41d397fabad11c"} Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.642691 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.645533 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-p9lh8" event={"ID":"b11bad93-5af0-4c75-954c-42cc99684597","Type":"ContainerDied","Data":"f9b851496d5cce0303b165e18e03386edf9e343272b9928f99078559a7e8d5a0"} Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.645930 4836 scope.go:117] "RemoveContainer" containerID="23e3b31d1bf10a2dce07355e7766faafe2d1bf59f1230f2ba36f46b169423e26" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.645588 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9lh8" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.647875 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" event={"ID":"16c736d5-389e-4d03-9657-1abcd4448953","Type":"ContainerStarted","Data":"47bb71e6c27b2365020b83b9886828ebf9da6c15c21da87051b1933fdd3210e0"} Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.648401 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.678979 4836 scope.go:117] "RemoveContainer" containerID="4fe1fe066d30b34ae39334b2a5ab55fea3ebf731c92645405e6f3bbb74be985c" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.682554 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" podStartSLOduration=1.891030851 podStartE2EDuration="7.68254315s" podCreationTimestamp="2026-02-17 14:21:53 +0000 UTC" firstStartedPulling="2026-02-17 14:21:54.300672071 +0000 UTC m=+940.643600340" lastFinishedPulling="2026-02-17 14:22:00.09218437 +0000 UTC m=+946.435112639" observedRunningTime="2026-02-17 14:22:00.680907716 +0000 UTC m=+947.023835985" watchObservedRunningTime="2026-02-17 14:22:00.68254315 +0000 UTC m=+947.025471419" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 
14:22:00.710325 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9lh8"] Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.713648 4836 scope.go:117] "RemoveContainer" containerID="240ced32e0bb68659d8fa3215c9fcef735236350bf5d87b16d4adcec08100306" Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.715864 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p9lh8"] Feb 17 14:22:00 crc kubenswrapper[4836]: I0217 14:22:00.740635 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" podStartSLOduration=2.118125303 podStartE2EDuration="7.740610969s" podCreationTimestamp="2026-02-17 14:21:53 +0000 UTC" firstStartedPulling="2026-02-17 14:21:54.542630586 +0000 UTC m=+940.885558855" lastFinishedPulling="2026-02-17 14:22:00.165116262 +0000 UTC m=+946.508044521" observedRunningTime="2026-02-17 14:22:00.734828801 +0000 UTC m=+947.077757080" watchObservedRunningTime="2026-02-17 14:22:00.740610969 +0000 UTC m=+947.083539238" Feb 17 14:22:02 crc kubenswrapper[4836]: I0217 14:22:02.575216 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11bad93-5af0-4c75-954c-42cc99684597" path="/var/lib/kubelet/pods/b11bad93-5af0-4c75-954c-42cc99684597/volumes" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.785011 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9vmx8"] Feb 17 14:22:09 crc kubenswrapper[4836]: E0217 14:22:09.786066 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="extract-utilities" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.786083 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="extract-utilities" Feb 17 14:22:09 crc kubenswrapper[4836]: E0217 
14:22:09.786095 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="extract-content" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.786102 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="extract-content" Feb 17 14:22:09 crc kubenswrapper[4836]: E0217 14:22:09.786120 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="registry-server" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.786129 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="registry-server" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.786571 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11bad93-5af0-4c75-954c-42cc99684597" containerName="registry-server" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.787546 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.802370 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vmx8"] Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.850288 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-utilities\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.850668 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-catalog-content\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.850777 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtxkb\" (UniqueName: \"kubernetes.io/projected/989e8ec6-9217-43f4-969a-07d9cb793ca9-kube-api-access-gtxkb\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.951586 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-utilities\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.951977 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-catalog-content\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.952103 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtxkb\" (UniqueName: \"kubernetes.io/projected/989e8ec6-9217-43f4-969a-07d9cb793ca9-kube-api-access-gtxkb\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.952203 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-utilities\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.952603 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-catalog-content\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:09 crc kubenswrapper[4836]: I0217 14:22:09.974534 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtxkb\" (UniqueName: \"kubernetes.io/projected/989e8ec6-9217-43f4-969a-07d9cb793ca9-kube-api-access-gtxkb\") pod \"redhat-marketplace-9vmx8\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:10 crc kubenswrapper[4836]: I0217 14:22:10.108661 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:10 crc kubenswrapper[4836]: I0217 14:22:10.587543 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vmx8"] Feb 17 14:22:10 crc kubenswrapper[4836]: I0217 14:22:10.722775 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vmx8" event={"ID":"989e8ec6-9217-43f4-969a-07d9cb793ca9","Type":"ContainerStarted","Data":"8fe721131e87d39ebd81dc2d83e5b54d94718d9776edb118ec98f74e1234a082"} Feb 17 14:22:11 crc kubenswrapper[4836]: I0217 14:22:11.731492 4836 generic.go:334] "Generic (PLEG): container finished" podID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerID="82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282" exitCode=0 Feb 17 14:22:11 crc kubenswrapper[4836]: I0217 14:22:11.731689 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vmx8" event={"ID":"989e8ec6-9217-43f4-969a-07d9cb793ca9","Type":"ContainerDied","Data":"82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282"} Feb 17 14:22:12 crc kubenswrapper[4836]: I0217 14:22:12.747317 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vmx8" event={"ID":"989e8ec6-9217-43f4-969a-07d9cb793ca9","Type":"ContainerStarted","Data":"95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a"} Feb 17 14:22:13 crc kubenswrapper[4836]: I0217 14:22:13.758444 4836 generic.go:334] "Generic (PLEG): container finished" podID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerID="95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a" exitCode=0 Feb 17 14:22:13 crc kubenswrapper[4836]: I0217 14:22:13.758582 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vmx8" 
event={"ID":"989e8ec6-9217-43f4-969a-07d9cb793ca9","Type":"ContainerDied","Data":"95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a"} Feb 17 14:22:14 crc kubenswrapper[4836]: I0217 14:22:14.283600 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-856546fc87-n5vrx" Feb 17 14:22:14 crc kubenswrapper[4836]: I0217 14:22:14.780365 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vmx8" event={"ID":"989e8ec6-9217-43f4-969a-07d9cb793ca9","Type":"ContainerStarted","Data":"59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59"} Feb 17 14:22:14 crc kubenswrapper[4836]: I0217 14:22:14.806113 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9vmx8" podStartSLOduration=3.356018594 podStartE2EDuration="5.806097644s" podCreationTimestamp="2026-02-17 14:22:09 +0000 UTC" firstStartedPulling="2026-02-17 14:22:11.734026585 +0000 UTC m=+958.076954854" lastFinishedPulling="2026-02-17 14:22:14.184105635 +0000 UTC m=+960.527033904" observedRunningTime="2026-02-17 14:22:14.804803549 +0000 UTC m=+961.147731828" watchObservedRunningTime="2026-02-17 14:22:14.806097644 +0000 UTC m=+961.149025913" Feb 17 14:22:20 crc kubenswrapper[4836]: I0217 14:22:20.109650 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:20 crc kubenswrapper[4836]: I0217 14:22:20.110386 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:20 crc kubenswrapper[4836]: I0217 14:22:20.156845 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:21 crc kubenswrapper[4836]: I0217 14:22:21.109611 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:21 crc kubenswrapper[4836]: I0217 14:22:21.156649 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vmx8"] Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.073261 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9vmx8" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="registry-server" containerID="cri-o://59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59" gracePeriod=2 Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.452872 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.543948 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-catalog-content\") pod \"989e8ec6-9217-43f4-969a-07d9cb793ca9\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.544030 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtxkb\" (UniqueName: \"kubernetes.io/projected/989e8ec6-9217-43f4-969a-07d9cb793ca9-kube-api-access-gtxkb\") pod \"989e8ec6-9217-43f4-969a-07d9cb793ca9\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.544081 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-utilities\") pod \"989e8ec6-9217-43f4-969a-07d9cb793ca9\" (UID: \"989e8ec6-9217-43f4-969a-07d9cb793ca9\") " Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.545521 4836 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-utilities" (OuterVolumeSpecName: "utilities") pod "989e8ec6-9217-43f4-969a-07d9cb793ca9" (UID: "989e8ec6-9217-43f4-969a-07d9cb793ca9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.551529 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989e8ec6-9217-43f4-969a-07d9cb793ca9-kube-api-access-gtxkb" (OuterVolumeSpecName: "kube-api-access-gtxkb") pod "989e8ec6-9217-43f4-969a-07d9cb793ca9" (UID: "989e8ec6-9217-43f4-969a-07d9cb793ca9"). InnerVolumeSpecName "kube-api-access-gtxkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.645496 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.645532 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtxkb\" (UniqueName: \"kubernetes.io/projected/989e8ec6-9217-43f4-969a-07d9cb793ca9-kube-api-access-gtxkb\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.730751 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "989e8ec6-9217-43f4-969a-07d9cb793ca9" (UID: "989e8ec6-9217-43f4-969a-07d9cb793ca9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:22:23 crc kubenswrapper[4836]: I0217 14:22:23.747314 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989e8ec6-9217-43f4-969a-07d9cb793ca9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.081200 4836 generic.go:334] "Generic (PLEG): container finished" podID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerID="59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59" exitCode=0 Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.081283 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vmx8" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.081318 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vmx8" event={"ID":"989e8ec6-9217-43f4-969a-07d9cb793ca9","Type":"ContainerDied","Data":"59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59"} Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.082338 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vmx8" event={"ID":"989e8ec6-9217-43f4-969a-07d9cb793ca9","Type":"ContainerDied","Data":"8fe721131e87d39ebd81dc2d83e5b54d94718d9776edb118ec98f74e1234a082"} Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.082366 4836 scope.go:117] "RemoveContainer" containerID="59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.107163 4836 scope.go:117] "RemoveContainer" containerID="95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.120088 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vmx8"] Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 
14:22:24.139385 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vmx8"] Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.152727 4836 scope.go:117] "RemoveContainer" containerID="82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.171811 4836 scope.go:117] "RemoveContainer" containerID="59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59" Feb 17 14:22:24 crc kubenswrapper[4836]: E0217 14:22:24.172661 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59\": container with ID starting with 59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59 not found: ID does not exist" containerID="59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.172706 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59"} err="failed to get container status \"59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59\": rpc error: code = NotFound desc = could not find container \"59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59\": container with ID starting with 59ac6df301e0f88ce0c625edb0b19c60eb087a56643b9f74e869499526bb8a59 not found: ID does not exist" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.172732 4836 scope.go:117] "RemoveContainer" containerID="95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a" Feb 17 14:22:24 crc kubenswrapper[4836]: E0217 14:22:24.172979 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a\": container with ID 
starting with 95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a not found: ID does not exist" containerID="95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.173010 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a"} err="failed to get container status \"95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a\": rpc error: code = NotFound desc = could not find container \"95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a\": container with ID starting with 95f148ce857c93eefbaf1d9b8240286284786811ddbc3cdd133a6fa5f0568d0a not found: ID does not exist" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.173028 4836 scope.go:117] "RemoveContainer" containerID="82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282" Feb 17 14:22:24 crc kubenswrapper[4836]: E0217 14:22:24.173530 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282\": container with ID starting with 82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282 not found: ID does not exist" containerID="82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.173554 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282"} err="failed to get container status \"82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282\": rpc error: code = NotFound desc = could not find container \"82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282\": container with ID starting with 82e3c1617eac7ebac1ee94ffed06e82d3836f7d995455d1ac8436067da32e282 not found: 
ID does not exist" Feb 17 14:22:24 crc kubenswrapper[4836]: I0217 14:22:24.575215 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" path="/var/lib/kubelet/pods/989e8ec6-9217-43f4-969a-07d9cb793ca9/volumes" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.166766 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vxxx8"] Feb 17 14:22:26 crc kubenswrapper[4836]: E0217 14:22:26.167056 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="registry-server" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.167073 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="registry-server" Feb 17 14:22:26 crc kubenswrapper[4836]: E0217 14:22:26.167086 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="extract-content" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.167094 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="extract-content" Feb 17 14:22:26 crc kubenswrapper[4836]: E0217 14:22:26.167114 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="extract-utilities" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.167122 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="extract-utilities" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.167262 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="989e8ec6-9217-43f4-969a-07d9cb793ca9" containerName="registry-server" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.168279 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.176311 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxxx8"] Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.282903 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4jss\" (UniqueName: \"kubernetes.io/projected/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-kube-api-access-q4jss\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.283024 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-utilities\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.283068 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-catalog-content\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.426804 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4jss\" (UniqueName: \"kubernetes.io/projected/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-kube-api-access-q4jss\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.426871 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-utilities\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.426899 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-catalog-content\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.427628 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-catalog-content\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.427979 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-utilities\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.453153 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4jss\" (UniqueName: \"kubernetes.io/projected/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-kube-api-access-q4jss\") pod \"community-operators-vxxx8\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.534490 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:26 crc kubenswrapper[4836]: I0217 14:22:26.773706 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxxx8"] Feb 17 14:22:27 crc kubenswrapper[4836]: I0217 14:22:27.106478 4836 generic.go:334] "Generic (PLEG): container finished" podID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerID="87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03" exitCode=0 Feb 17 14:22:27 crc kubenswrapper[4836]: I0217 14:22:27.106599 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxxx8" event={"ID":"ee0bd3ed-4af9-40a3-9742-ee548934f0c7","Type":"ContainerDied","Data":"87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03"} Feb 17 14:22:27 crc kubenswrapper[4836]: I0217 14:22:27.106935 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxxx8" event={"ID":"ee0bd3ed-4af9-40a3-9742-ee548934f0c7","Type":"ContainerStarted","Data":"cb2c2b42c4e66e02d5c1a234e004888a9a328e8e2b0673f2fef499c320e33d68"} Feb 17 14:22:28 crc kubenswrapper[4836]: I0217 14:22:28.116012 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxxx8" event={"ID":"ee0bd3ed-4af9-40a3-9742-ee548934f0c7","Type":"ContainerStarted","Data":"a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d"} Feb 17 14:22:29 crc kubenswrapper[4836]: I0217 14:22:29.125267 4836 generic.go:334] "Generic (PLEG): container finished" podID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerID="a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d" exitCode=0 Feb 17 14:22:29 crc kubenswrapper[4836]: I0217 14:22:29.125663 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxxx8" 
event={"ID":"ee0bd3ed-4af9-40a3-9742-ee548934f0c7","Type":"ContainerDied","Data":"a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d"} Feb 17 14:22:30 crc kubenswrapper[4836]: I0217 14:22:30.137004 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxxx8" event={"ID":"ee0bd3ed-4af9-40a3-9742-ee548934f0c7","Type":"ContainerStarted","Data":"130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c"} Feb 17 14:22:30 crc kubenswrapper[4836]: I0217 14:22:30.158271 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vxxx8" podStartSLOduration=1.781059736 podStartE2EDuration="4.158249716s" podCreationTimestamp="2026-02-17 14:22:26 +0000 UTC" firstStartedPulling="2026-02-17 14:22:27.108243121 +0000 UTC m=+973.451171390" lastFinishedPulling="2026-02-17 14:22:29.485433091 +0000 UTC m=+975.828361370" observedRunningTime="2026-02-17 14:22:30.152948761 +0000 UTC m=+976.495877040" watchObservedRunningTime="2026-02-17 14:22:30.158249716 +0000 UTC m=+976.501177985" Feb 17 14:22:33 crc kubenswrapper[4836]: I0217 14:22:33.745929 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69b9cbf5df-6fkqt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.591636 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-x257b"] Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.596176 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt"] Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.596431 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.596937 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.600264 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.600456 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.601517 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.601674 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cwnkd" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.605220 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt"] Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.693743 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pb5ff"] Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.694821 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.699468 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-md5x8" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.699463 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.699550 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.699517 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.706928 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-szl4j"] Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.708202 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.710383 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.721773 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-szl4j"] Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726471 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-reloader\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726559 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqr5\" (UniqueName: \"kubernetes.io/projected/e019f338-ff73-4160-a283-a71e9e6119b3-kube-api-access-jhqr5\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726656 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-frr-sockets\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726689 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e019f338-ff73-4160-a283-a71e9e6119b3-frr-startup\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726716 
4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e019f338-ff73-4160-a283-a71e9e6119b3-metrics-certs\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726730 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-metrics\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726751 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-frr-conf\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726777 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7z7\" (UniqueName: \"kubernetes.io/projected/18ec2995-af0c-4c47-aa70-480f9323329e-kube-api-access-bp7z7\") pod \"frr-k8s-webhook-server-78b44bf5bb-mznjt\" (UID: \"18ec2995-af0c-4c47-aa70-480f9323329e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.726807 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18ec2995-af0c-4c47-aa70-480f9323329e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-mznjt\" (UID: \"18ec2995-af0c-4c47-aa70-480f9323329e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828273 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-frr-sockets\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828362 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e019f338-ff73-4160-a283-a71e9e6119b3-frr-startup\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828387 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e019f338-ff73-4160-a283-a71e9e6119b3-metrics-certs\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828404 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-metrics\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828423 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-frr-conf\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828448 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwzc\" (UniqueName: \"kubernetes.io/projected/2690ef6e-0489-43f3-b787-8b6c1295e283-kube-api-access-htwzc\") 
pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828473 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27eed55a-1a00-497e-9aa4-74f7007f336e-metrics-certs\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828515 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7z7\" (UniqueName: \"kubernetes.io/projected/18ec2995-af0c-4c47-aa70-480f9323329e-kube-api-access-bp7z7\") pod \"frr-k8s-webhook-server-78b44bf5bb-mznjt\" (UID: \"18ec2995-af0c-4c47-aa70-480f9323329e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828552 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18ec2995-af0c-4c47-aa70-480f9323329e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-mznjt\" (UID: \"18ec2995-af0c-4c47-aa70-480f9323329e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828581 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27eed55a-1a00-497e-9aa4-74f7007f336e-cert\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828678 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-reloader\") pod \"frr-k8s-x257b\" (UID: 
\"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828708 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-metrics-certs\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828850 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqr5\" (UniqueName: \"kubernetes.io/projected/e019f338-ff73-4160-a283-a71e9e6119b3-kube-api-access-jhqr5\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828884 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2690ef6e-0489-43f3-b787-8b6c1295e283-metallb-excludel2\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828918 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szs88\" (UniqueName: \"kubernetes.io/projected/27eed55a-1a00-497e-9aa4-74f7007f336e-kube-api-access-szs88\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.828955 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " 
pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.829164 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-metrics\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.829181 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-reloader\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.829462 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-frr-sockets\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.829537 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e019f338-ff73-4160-a283-a71e9e6119b3-frr-conf\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.829575 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e019f338-ff73-4160-a283-a71e9e6119b3-frr-startup\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.836285 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e019f338-ff73-4160-a283-a71e9e6119b3-metrics-certs\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.837139 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18ec2995-af0c-4c47-aa70-480f9323329e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-mznjt\" (UID: \"18ec2995-af0c-4c47-aa70-480f9323329e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.855388 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp7z7\" (UniqueName: \"kubernetes.io/projected/18ec2995-af0c-4c47-aa70-480f9323329e-kube-api-access-bp7z7\") pod \"frr-k8s-webhook-server-78b44bf5bb-mznjt\" (UID: \"18ec2995-af0c-4c47-aa70-480f9323329e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.900770 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqr5\" (UniqueName: \"kubernetes.io/projected/e019f338-ff73-4160-a283-a71e9e6119b3-kube-api-access-jhqr5\") pod \"frr-k8s-x257b\" (UID: \"e019f338-ff73-4160-a283-a71e9e6119b3\") " pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.925696 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.930984 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwzc\" (UniqueName: \"kubernetes.io/projected/2690ef6e-0489-43f3-b787-8b6c1295e283-kube-api-access-htwzc\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.931044 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27eed55a-1a00-497e-9aa4-74f7007f336e-metrics-certs\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.931828 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27eed55a-1a00-497e-9aa4-74f7007f336e-cert\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.931870 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-metrics-certs\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.931907 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2690ef6e-0489-43f3-b787-8b6c1295e283-metallb-excludel2\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.931932 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szs88\" (UniqueName: \"kubernetes.io/projected/27eed55a-1a00-497e-9aa4-74f7007f336e-kube-api-access-szs88\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.931974 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: E0217 14:22:34.932076 4836 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 14:22:34 crc kubenswrapper[4836]: E0217 14:22:34.932134 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist podName:2690ef6e-0489-43f3-b787-8b6c1295e283 nodeName:}" failed. No retries permitted until 2026-02-17 14:22:35.432108801 +0000 UTC m=+981.775037070 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist") pod "speaker-pb5ff" (UID: "2690ef6e-0489-43f3-b787-8b6c1295e283") : secret "metallb-memberlist" not found Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.933051 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2690ef6e-0489-43f3-b787-8b6c1295e283-metallb-excludel2\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.935460 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-metrics-certs\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.935800 4836 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.936723 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27eed55a-1a00-497e-9aa4-74f7007f336e-metrics-certs\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.940745 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.945750 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27eed55a-1a00-497e-9aa4-74f7007f336e-cert\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.950351 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwzc\" (UniqueName: \"kubernetes.io/projected/2690ef6e-0489-43f3-b787-8b6c1295e283-kube-api-access-htwzc\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:34 crc kubenswrapper[4836]: I0217 14:22:34.954871 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szs88\" (UniqueName: \"kubernetes.io/projected/27eed55a-1a00-497e-9aa4-74f7007f336e-kube-api-access-szs88\") pod \"controller-69bbfbf88f-szl4j\" (UID: \"27eed55a-1a00-497e-9aa4-74f7007f336e\") " pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:35 crc kubenswrapper[4836]: I0217 14:22:35.024629 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:35 crc kubenswrapper[4836]: I0217 14:22:35.132059 4836 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:22:35 crc kubenswrapper[4836]: I0217 14:22:35.175098 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerStarted","Data":"74568734c97bb5c9fae50817ad229949aa021b4982838f7cc380326f6b22251f"} Feb 17 14:22:35 crc kubenswrapper[4836]: I0217 14:22:35.244918 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt"] Feb 17 14:22:35 crc kubenswrapper[4836]: W0217 14:22:35.250957 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ec2995_af0c_4c47_aa70_480f9323329e.slice/crio-10857b4d4594f8173323137f0eb07e6d97fa097637eeb77eb062b9d77fc891d9 WatchSource:0}: Error finding container 10857b4d4594f8173323137f0eb07e6d97fa097637eeb77eb062b9d77fc891d9: Status 404 returned error can't find the container with id 10857b4d4594f8173323137f0eb07e6d97fa097637eeb77eb062b9d77fc891d9 Feb 17 14:22:35 crc kubenswrapper[4836]: I0217 14:22:35.340315 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-szl4j"] Feb 17 14:22:35 crc kubenswrapper[4836]: W0217 14:22:35.342313 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27eed55a_1a00_497e_9aa4_74f7007f336e.slice/crio-349fcc2a13683cafd7df0f94207afba27f23b46ab5a2a33a05a8df4a0a33eb0b WatchSource:0}: Error finding container 349fcc2a13683cafd7df0f94207afba27f23b46ab5a2a33a05a8df4a0a33eb0b: Status 404 returned error can't find the container with id 349fcc2a13683cafd7df0f94207afba27f23b46ab5a2a33a05a8df4a0a33eb0b Feb 17 14:22:35 crc 
kubenswrapper[4836]: I0217 14:22:35.440399 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:35 crc kubenswrapper[4836]: E0217 14:22:35.440638 4836 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 14:22:35 crc kubenswrapper[4836]: E0217 14:22:35.440752 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist podName:2690ef6e-0489-43f3-b787-8b6c1295e283 nodeName:}" failed. No retries permitted until 2026-02-17 14:22:36.440732783 +0000 UTC m=+982.783661052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist") pod "speaker-pb5ff" (UID: "2690ef6e-0489-43f3-b787-8b6c1295e283") : secret "metallb-memberlist" not found Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.184722 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-szl4j" event={"ID":"27eed55a-1a00-497e-9aa4-74f7007f336e","Type":"ContainerStarted","Data":"e084d3dab6f69a37fb444957d942314d3bf90015779c0b410e5d662a99910549"} Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.185180 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-szl4j" event={"ID":"27eed55a-1a00-497e-9aa4-74f7007f336e","Type":"ContainerStarted","Data":"1bab3001db627bc25ecc180ac435d391eead6634a59401452067eaef7eb48f43"} Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.185223 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-szl4j" 
event={"ID":"27eed55a-1a00-497e-9aa4-74f7007f336e","Type":"ContainerStarted","Data":"349fcc2a13683cafd7df0f94207afba27f23b46ab5a2a33a05a8df4a0a33eb0b"} Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.185246 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.186453 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" event={"ID":"18ec2995-af0c-4c47-aa70-480f9323329e","Type":"ContainerStarted","Data":"10857b4d4594f8173323137f0eb07e6d97fa097637eeb77eb062b9d77fc891d9"} Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.206715 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-szl4j" podStartSLOduration=2.206691027 podStartE2EDuration="2.206691027s" podCreationTimestamp="2026-02-17 14:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:22:36.202792668 +0000 UTC m=+982.545720957" watchObservedRunningTime="2026-02-17 14:22:36.206691027 +0000 UTC m=+982.549619306" Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.498423 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.504414 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2690ef6e-0489-43f3-b787-8b6c1295e283-memberlist\") pod \"speaker-pb5ff\" (UID: \"2690ef6e-0489-43f3-b787-8b6c1295e283\") " pod="metallb-system/speaker-pb5ff" Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.511371 
4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pb5ff" Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.536149 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.536195 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:36 crc kubenswrapper[4836]: I0217 14:22:36.586129 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:37 crc kubenswrapper[4836]: I0217 14:22:37.245554 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pb5ff" event={"ID":"2690ef6e-0489-43f3-b787-8b6c1295e283","Type":"ContainerStarted","Data":"37b6e206e2beb70b9b6f25c3505a37f97c167cb7b97fc9f25540cac2014508ce"} Feb 17 14:22:37 crc kubenswrapper[4836]: I0217 14:22:37.245897 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pb5ff" event={"ID":"2690ef6e-0489-43f3-b787-8b6c1295e283","Type":"ContainerStarted","Data":"af41ecfd4980ad5a9b76d3887f65a9f91bf29d79cbea9d371fcff42dffe9b36e"} Feb 17 14:22:37 crc kubenswrapper[4836]: I0217 14:22:37.300074 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:37 crc kubenswrapper[4836]: I0217 14:22:37.441646 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxxx8"] Feb 17 14:22:38 crc kubenswrapper[4836]: I0217 14:22:38.268639 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pb5ff" event={"ID":"2690ef6e-0489-43f3-b787-8b6c1295e283","Type":"ContainerStarted","Data":"8e8f7d7b2105a43f43460c23011fa0cfe81eff9f63ca393e833b54a7665842dd"} Feb 17 14:22:38 crc 
kubenswrapper[4836]: I0217 14:22:38.296700 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pb5ff" podStartSLOduration=4.296681382 podStartE2EDuration="4.296681382s" podCreationTimestamp="2026-02-17 14:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:22:38.293363037 +0000 UTC m=+984.636291306" watchObservedRunningTime="2026-02-17 14:22:38.296681382 +0000 UTC m=+984.639609661" Feb 17 14:22:39 crc kubenswrapper[4836]: I0217 14:22:39.276309 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vxxx8" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="registry-server" containerID="cri-o://130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c" gracePeriod=2 Feb 17 14:22:39 crc kubenswrapper[4836]: I0217 14:22:39.276594 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pb5ff" Feb 17 14:22:39 crc kubenswrapper[4836]: I0217 14:22:39.904106 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.062902 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-utilities\") pod \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.063016 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-catalog-content\") pod \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.063254 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4jss\" (UniqueName: \"kubernetes.io/projected/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-kube-api-access-q4jss\") pod \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\" (UID: \"ee0bd3ed-4af9-40a3-9742-ee548934f0c7\") " Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.064628 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-utilities" (OuterVolumeSpecName: "utilities") pod "ee0bd3ed-4af9-40a3-9742-ee548934f0c7" (UID: "ee0bd3ed-4af9-40a3-9742-ee548934f0c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.082617 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-kube-api-access-q4jss" (OuterVolumeSpecName: "kube-api-access-q4jss") pod "ee0bd3ed-4af9-40a3-9742-ee548934f0c7" (UID: "ee0bd3ed-4af9-40a3-9742-ee548934f0c7"). InnerVolumeSpecName "kube-api-access-q4jss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.118728 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee0bd3ed-4af9-40a3-9742-ee548934f0c7" (UID: "ee0bd3ed-4af9-40a3-9742-ee548934f0c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.165811 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4jss\" (UniqueName: \"kubernetes.io/projected/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-kube-api-access-q4jss\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.165873 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.165887 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee0bd3ed-4af9-40a3-9742-ee548934f0c7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.286705 4836 generic.go:334] "Generic (PLEG): container finished" podID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerID="130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c" exitCode=0 Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.286802 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxxx8" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.286802 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxxx8" event={"ID":"ee0bd3ed-4af9-40a3-9742-ee548934f0c7","Type":"ContainerDied","Data":"130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c"} Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.287006 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxxx8" event={"ID":"ee0bd3ed-4af9-40a3-9742-ee548934f0c7","Type":"ContainerDied","Data":"cb2c2b42c4e66e02d5c1a234e004888a9a328e8e2b0673f2fef499c320e33d68"} Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.287036 4836 scope.go:117] "RemoveContainer" containerID="130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.327786 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxxx8"] Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.334876 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vxxx8"] Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.503848 4836 scope.go:117] "RemoveContainer" containerID="a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.556151 4836 scope.go:117] "RemoveContainer" containerID="87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.584373 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" path="/var/lib/kubelet/pods/ee0bd3ed-4af9-40a3-9742-ee548934f0c7/volumes" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.601442 4836 scope.go:117] "RemoveContainer" 
containerID="130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c" Feb 17 14:22:40 crc kubenswrapper[4836]: E0217 14:22:40.603364 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c\": container with ID starting with 130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c not found: ID does not exist" containerID="130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.603429 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c"} err="failed to get container status \"130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c\": rpc error: code = NotFound desc = could not find container \"130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c\": container with ID starting with 130efc368435db6a4abd3d1e4331eadbf9c826809151887de231fde45fba1e8c not found: ID does not exist" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.603462 4836 scope.go:117] "RemoveContainer" containerID="a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d" Feb 17 14:22:40 crc kubenswrapper[4836]: E0217 14:22:40.603952 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d\": container with ID starting with a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d not found: ID does not exist" containerID="a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.604004 4836 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d"} err="failed to get container status \"a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d\": rpc error: code = NotFound desc = could not find container \"a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d\": container with ID starting with a437392fc716ac4c181c21aee412d30402f80e7fb51c5e7b4a9ec90fd620087d not found: ID does not exist" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.604023 4836 scope.go:117] "RemoveContainer" containerID="87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03" Feb 17 14:22:40 crc kubenswrapper[4836]: E0217 14:22:40.604258 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03\": container with ID starting with 87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03 not found: ID does not exist" containerID="87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03" Feb 17 14:22:40 crc kubenswrapper[4836]: I0217 14:22:40.604281 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03"} err="failed to get container status \"87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03\": rpc error: code = NotFound desc = could not find container \"87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03\": container with ID starting with 87a0d8ab30fd50050f265f555c2ac0eecf86edd798af11bdde6d9dab11cbbc03 not found: ID does not exist" Feb 17 14:22:45 crc kubenswrapper[4836]: I0217 14:22:45.037621 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-szl4j" Feb 17 14:22:46 crc kubenswrapper[4836]: I0217 14:22:46.340976 4836 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" event={"ID":"18ec2995-af0c-4c47-aa70-480f9323329e","Type":"ContainerStarted","Data":"1c0920ccfc9e03a93f65617bd3331b91335257e2cf2ec0423fbdf07325adcfa0"} Feb 17 14:22:46 crc kubenswrapper[4836]: I0217 14:22:46.341370 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:22:46 crc kubenswrapper[4836]: I0217 14:22:46.344890 4836 generic.go:334] "Generic (PLEG): container finished" podID="e019f338-ff73-4160-a283-a71e9e6119b3" containerID="1077f06d3d5bebd01c4ffbd75e580dc56862da4c37f0070d4820446af9de47f9" exitCode=0 Feb 17 14:22:46 crc kubenswrapper[4836]: I0217 14:22:46.344926 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerDied","Data":"1077f06d3d5bebd01c4ffbd75e580dc56862da4c37f0070d4820446af9de47f9"} Feb 17 14:22:46 crc kubenswrapper[4836]: I0217 14:22:46.356729 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" podStartSLOduration=1.952169526 podStartE2EDuration="12.356711521s" podCreationTimestamp="2026-02-17 14:22:34 +0000 UTC" firstStartedPulling="2026-02-17 14:22:35.256821831 +0000 UTC m=+981.599750100" lastFinishedPulling="2026-02-17 14:22:45.661363826 +0000 UTC m=+992.004292095" observedRunningTime="2026-02-17 14:22:46.355393357 +0000 UTC m=+992.698321636" watchObservedRunningTime="2026-02-17 14:22:46.356711521 +0000 UTC m=+992.699639790" Feb 17 14:22:46 crc kubenswrapper[4836]: I0217 14:22:46.515220 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pb5ff" Feb 17 14:22:47 crc kubenswrapper[4836]: I0217 14:22:47.352313 4836 generic.go:334] "Generic (PLEG): container finished" podID="e019f338-ff73-4160-a283-a71e9e6119b3" 
containerID="6c83f84061b3a004002402735bacc0efc2a93e244ec45cd30c47231cb2afda75" exitCode=0 Feb 17 14:22:47 crc kubenswrapper[4836]: I0217 14:22:47.352412 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerDied","Data":"6c83f84061b3a004002402735bacc0efc2a93e244ec45cd30c47231cb2afda75"} Feb 17 14:22:48 crc kubenswrapper[4836]: I0217 14:22:48.361062 4836 generic.go:334] "Generic (PLEG): container finished" podID="e019f338-ff73-4160-a283-a71e9e6119b3" containerID="d6a140b47bc595c4e0bd540c1b87ceab5fb212382476eab090184203a2d6d60d" exitCode=0 Feb 17 14:22:48 crc kubenswrapper[4836]: I0217 14:22:48.361185 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerDied","Data":"d6a140b47bc595c4e0bd540c1b87ceab5fb212382476eab090184203a2d6d60d"} Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.374797 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerStarted","Data":"4221d330bfed77c0bd8ca6e95a47e5241b742b1538b0cdbcb61b7daf649469e1"} Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.375161 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerStarted","Data":"7e2f69401c03e46067356f09eec6841704c994a0693fde52e64c714223d21f28"} Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.375174 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerStarted","Data":"8f78fdf8808d9c7835a9f6e5c6bc93f71345184c3cdc90101be5000f422fb1e0"} Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.375188 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerStarted","Data":"e1961ec1710c9130f42a4d633b94b4200234719ebfee80baa144c2765eb9402e"} Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.375198 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerStarted","Data":"1da60767d90fabb62748e8dc88f8c69ccdfa89a32f1b48da2d4d8a96b7c55407"} Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.620546 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f2nk9"] Feb 17 14:22:49 crc kubenswrapper[4836]: E0217 14:22:49.620871 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="registry-server" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.620892 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="registry-server" Feb 17 14:22:49 crc kubenswrapper[4836]: E0217 14:22:49.620910 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="extract-utilities" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.620918 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="extract-utilities" Feb 17 14:22:49 crc kubenswrapper[4836]: E0217 14:22:49.620933 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="extract-content" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.620941 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="extract-content" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.621090 4836 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ee0bd3ed-4af9-40a3-9742-ee548934f0c7" containerName="registry-server" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.621647 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f2nk9" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.624812 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ntfqc" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.624824 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.627928 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.639369 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f2nk9"] Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.777517 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9v82\" (UniqueName: \"kubernetes.io/projected/edb91fa8-3288-4ae3-b355-7cb7849c1d8d-kube-api-access-n9v82\") pod \"openstack-operator-index-f2nk9\" (UID: \"edb91fa8-3288-4ae3-b355-7cb7849c1d8d\") " pod="openstack-operators/openstack-operator-index-f2nk9" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.879106 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9v82\" (UniqueName: \"kubernetes.io/projected/edb91fa8-3288-4ae3-b355-7cb7849c1d8d-kube-api-access-n9v82\") pod \"openstack-operator-index-f2nk9\" (UID: \"edb91fa8-3288-4ae3-b355-7cb7849c1d8d\") " pod="openstack-operators/openstack-operator-index-f2nk9" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.899189 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n9v82\" (UniqueName: \"kubernetes.io/projected/edb91fa8-3288-4ae3-b355-7cb7849c1d8d-kube-api-access-n9v82\") pod \"openstack-operator-index-f2nk9\" (UID: \"edb91fa8-3288-4ae3-b355-7cb7849c1d8d\") " pod="openstack-operators/openstack-operator-index-f2nk9" Feb 17 14:22:49 crc kubenswrapper[4836]: I0217 14:22:49.940373 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f2nk9" Feb 17 14:22:50 crc kubenswrapper[4836]: I0217 14:22:50.429871 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x257b" event={"ID":"e019f338-ff73-4160-a283-a71e9e6119b3","Type":"ContainerStarted","Data":"973eef3da60090899d57696150c36ebae2399dddc4b5043b47f2d6aed253dbad"} Feb 17 14:22:50 crc kubenswrapper[4836]: I0217 14:22:50.430702 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:50 crc kubenswrapper[4836]: I0217 14:22:50.463697 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-x257b" podStartSLOduration=5.953230552 podStartE2EDuration="16.463675511s" podCreationTimestamp="2026-02-17 14:22:34 +0000 UTC" firstStartedPulling="2026-02-17 14:22:35.131803262 +0000 UTC m=+981.474731531" lastFinishedPulling="2026-02-17 14:22:45.642248221 +0000 UTC m=+991.985176490" observedRunningTime="2026-02-17 14:22:50.46088622 +0000 UTC m=+996.803814509" watchObservedRunningTime="2026-02-17 14:22:50.463675511 +0000 UTC m=+996.806603780" Feb 17 14:22:50 crc kubenswrapper[4836]: I0217 14:22:50.654744 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f2nk9"] Feb 17 14:22:50 crc kubenswrapper[4836]: W0217 14:22:50.665509 4836 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedb91fa8_3288_4ae3_b355_7cb7849c1d8d.slice/crio-f2c61c3b6d7068ca074c8d1380495038c7afe3ece774b912db9f11ca0f35855c WatchSource:0}: Error finding container f2c61c3b6d7068ca074c8d1380495038c7afe3ece774b912db9f11ca0f35855c: Status 404 returned error can't find the container with id f2c61c3b6d7068ca074c8d1380495038c7afe3ece774b912db9f11ca0f35855c Feb 17 14:22:51 crc kubenswrapper[4836]: I0217 14:22:51.501878 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2nk9" event={"ID":"edb91fa8-3288-4ae3-b355-7cb7849c1d8d","Type":"ContainerStarted","Data":"f2c61c3b6d7068ca074c8d1380495038c7afe3ece774b912db9f11ca0f35855c"} Feb 17 14:22:51 crc kubenswrapper[4836]: I0217 14:22:51.575662 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f2nk9"] Feb 17 14:22:51 crc kubenswrapper[4836]: I0217 14:22:51.982653 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pz5pz"] Feb 17 14:22:51 crc kubenswrapper[4836]: I0217 14:22:51.983696 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:22:51 crc kubenswrapper[4836]: I0217 14:22:51.994498 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pz5pz"] Feb 17 14:22:52 crc kubenswrapper[4836]: I0217 14:22:52.162254 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lxf8\" (UniqueName: \"kubernetes.io/projected/f0982db9-e1ef-4fc9-b7d4-e52ac91e6676-kube-api-access-4lxf8\") pod \"openstack-operator-index-pz5pz\" (UID: \"f0982db9-e1ef-4fc9-b7d4-e52ac91e6676\") " pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:22:52 crc kubenswrapper[4836]: I0217 14:22:52.263751 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lxf8\" (UniqueName: \"kubernetes.io/projected/f0982db9-e1ef-4fc9-b7d4-e52ac91e6676-kube-api-access-4lxf8\") pod \"openstack-operator-index-pz5pz\" (UID: \"f0982db9-e1ef-4fc9-b7d4-e52ac91e6676\") " pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:22:52 crc kubenswrapper[4836]: I0217 14:22:52.291135 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lxf8\" (UniqueName: \"kubernetes.io/projected/f0982db9-e1ef-4fc9-b7d4-e52ac91e6676-kube-api-access-4lxf8\") pod \"openstack-operator-index-pz5pz\" (UID: \"f0982db9-e1ef-4fc9-b7d4-e52ac91e6676\") " pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:22:52 crc kubenswrapper[4836]: I0217 14:22:52.302258 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:22:53 crc kubenswrapper[4836]: I0217 14:22:53.386196 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pz5pz"] Feb 17 14:22:53 crc kubenswrapper[4836]: I0217 14:22:53.516021 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2nk9" event={"ID":"edb91fa8-3288-4ae3-b355-7cb7849c1d8d","Type":"ContainerStarted","Data":"da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f"} Feb 17 14:22:53 crc kubenswrapper[4836]: I0217 14:22:53.516190 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-f2nk9" podUID="edb91fa8-3288-4ae3-b355-7cb7849c1d8d" containerName="registry-server" containerID="cri-o://da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f" gracePeriod=2 Feb 17 14:22:53 crc kubenswrapper[4836]: I0217 14:22:53.519287 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pz5pz" event={"ID":"f0982db9-e1ef-4fc9-b7d4-e52ac91e6676","Type":"ContainerStarted","Data":"8548eb24eb9b2410813d3e9fd6c73a7876264a12a1b057b176e8f75d28a659eb"} Feb 17 14:22:53 crc kubenswrapper[4836]: I0217 14:22:53.538678 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f2nk9" podStartSLOduration=1.946951591 podStartE2EDuration="4.538657044s" podCreationTimestamp="2026-02-17 14:22:49 +0000 UTC" firstStartedPulling="2026-02-17 14:22:50.668169134 +0000 UTC m=+997.011097403" lastFinishedPulling="2026-02-17 14:22:53.259874557 +0000 UTC m=+999.602802856" observedRunningTime="2026-02-17 14:22:53.533762419 +0000 UTC m=+999.876690708" watchObservedRunningTime="2026-02-17 14:22:53.538657044 +0000 UTC m=+999.881585313" Feb 17 14:22:53 crc kubenswrapper[4836]: I0217 14:22:53.940992 4836 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f2nk9" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.196566 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9v82\" (UniqueName: \"kubernetes.io/projected/edb91fa8-3288-4ae3-b355-7cb7849c1d8d-kube-api-access-n9v82\") pod \"edb91fa8-3288-4ae3-b355-7cb7849c1d8d\" (UID: \"edb91fa8-3288-4ae3-b355-7cb7849c1d8d\") " Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.203651 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb91fa8-3288-4ae3-b355-7cb7849c1d8d-kube-api-access-n9v82" (OuterVolumeSpecName: "kube-api-access-n9v82") pod "edb91fa8-3288-4ae3-b355-7cb7849c1d8d" (UID: "edb91fa8-3288-4ae3-b355-7cb7849c1d8d"). InnerVolumeSpecName "kube-api-access-n9v82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.298846 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9v82\" (UniqueName: \"kubernetes.io/projected/edb91fa8-3288-4ae3-b355-7cb7849c1d8d-kube-api-access-n9v82\") on node \"crc\" DevicePath \"\"" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.529600 4836 generic.go:334] "Generic (PLEG): container finished" podID="edb91fa8-3288-4ae3-b355-7cb7849c1d8d" containerID="da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f" exitCode=0 Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.529639 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f2nk9" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.529658 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2nk9" event={"ID":"edb91fa8-3288-4ae3-b355-7cb7849c1d8d","Type":"ContainerDied","Data":"da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f"} Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.530040 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2nk9" event={"ID":"edb91fa8-3288-4ae3-b355-7cb7849c1d8d","Type":"ContainerDied","Data":"f2c61c3b6d7068ca074c8d1380495038c7afe3ece774b912db9f11ca0f35855c"} Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.530064 4836 scope.go:117] "RemoveContainer" containerID="da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.532996 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pz5pz" event={"ID":"f0982db9-e1ef-4fc9-b7d4-e52ac91e6676","Type":"ContainerStarted","Data":"ab0fdfa98b6bc72d92c461dbb33cd68bef9f51986312eb90d88af739d4355f06"} Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.551900 4836 scope.go:117] "RemoveContainer" containerID="da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f" Feb 17 14:22:54 crc kubenswrapper[4836]: E0217 14:22:54.552998 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f\": container with ID starting with da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f not found: ID does not exist" containerID="da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.553057 4836 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f"} err="failed to get container status \"da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f\": rpc error: code = NotFound desc = could not find container \"da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f\": container with ID starting with da23f8997cf70d61317c32ce04b51763484e9e9080e955955480e6e878568c8f not found: ID does not exist" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.559533 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pz5pz" podStartSLOduration=3.513959275 podStartE2EDuration="3.55950842s" podCreationTimestamp="2026-02-17 14:22:51 +0000 UTC" firstStartedPulling="2026-02-17 14:22:53.399377303 +0000 UTC m=+999.742305572" lastFinishedPulling="2026-02-17 14:22:53.444926448 +0000 UTC m=+999.787854717" observedRunningTime="2026-02-17 14:22:54.552107761 +0000 UTC m=+1000.895036030" watchObservedRunningTime="2026-02-17 14:22:54.55950842 +0000 UTC m=+1000.902436689" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.590467 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f2nk9"] Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.590514 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-f2nk9"] Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.927272 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:54 crc kubenswrapper[4836]: I0217 14:22:54.967327 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-x257b" Feb 17 14:22:56 crc kubenswrapper[4836]: I0217 14:22:56.578082 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb91fa8-3288-4ae3-b355-7cb7849c1d8d" 
path="/var/lib/kubelet/pods/edb91fa8-3288-4ae3-b355-7cb7849c1d8d/volumes" Feb 17 14:22:59 crc kubenswrapper[4836]: I0217 14:22:59.765148 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:22:59 crc kubenswrapper[4836]: I0217 14:22:59.765558 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:23:02 crc kubenswrapper[4836]: I0217 14:23:02.357674 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:23:02 crc kubenswrapper[4836]: I0217 14:23:02.358018 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:23:02 crc kubenswrapper[4836]: I0217 14:23:02.433173 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:23:02 crc kubenswrapper[4836]: I0217 14:23:02.619841 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-pz5pz" Feb 17 14:23:04 crc kubenswrapper[4836]: I0217 14:23:04.932787 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-x257b" Feb 17 14:23:04 crc kubenswrapper[4836]: I0217 14:23:04.945606 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-mznjt" Feb 17 14:23:10 crc 
kubenswrapper[4836]: I0217 14:23:10.593838 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm"] Feb 17 14:23:10 crc kubenswrapper[4836]: E0217 14:23:10.595747 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb91fa8-3288-4ae3-b355-7cb7849c1d8d" containerName="registry-server" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.595784 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb91fa8-3288-4ae3-b355-7cb7849c1d8d" containerName="registry-server" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.595934 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb91fa8-3288-4ae3-b355-7cb7849c1d8d" containerName="registry-server" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.596919 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.600924 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2zt29" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.607041 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm"] Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.709956 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4g2g\" (UniqueName: \"kubernetes.io/projected/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-kube-api-access-s4g2g\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.710477 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-util\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.710514 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-bundle\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.812078 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4g2g\" (UniqueName: \"kubernetes.io/projected/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-kube-api-access-s4g2g\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.812190 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-util\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.812232 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-bundle\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.813150 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-bundle\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.813372 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-util\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.835868 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4g2g\" (UniqueName: \"kubernetes.io/projected/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-kube-api-access-s4g2g\") pod \"609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:10 crc kubenswrapper[4836]: I0217 14:23:10.933718 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:11 crc kubenswrapper[4836]: I0217 14:23:11.450478 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm"] Feb 17 14:23:11 crc kubenswrapper[4836]: W0217 14:23:11.460405 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc1ca64e_8914_44ae_8d9e_d7c63ba6e166.slice/crio-0ab5a5cd357a3e9c8bf32c08f6dc8b2f32cd15e6e31d32610488207b53a77062 WatchSource:0}: Error finding container 0ab5a5cd357a3e9c8bf32c08f6dc8b2f32cd15e6e31d32610488207b53a77062: Status 404 returned error can't find the container with id 0ab5a5cd357a3e9c8bf32c08f6dc8b2f32cd15e6e31d32610488207b53a77062 Feb 17 14:23:11 crc kubenswrapper[4836]: I0217 14:23:11.669062 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" event={"ID":"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166","Type":"ContainerStarted","Data":"b660291d58fd283d9879c66a66dc2fa63506daff41626c13a2530e7ee2f8b4f6"} Feb 17 14:23:11 crc kubenswrapper[4836]: I0217 14:23:11.669739 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" event={"ID":"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166","Type":"ContainerStarted","Data":"0ab5a5cd357a3e9c8bf32c08f6dc8b2f32cd15e6e31d32610488207b53a77062"} Feb 17 14:23:12 crc kubenswrapper[4836]: I0217 14:23:12.676597 4836 generic.go:334] "Generic (PLEG): container finished" podID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerID="b660291d58fd283d9879c66a66dc2fa63506daff41626c13a2530e7ee2f8b4f6" exitCode=0 Feb 17 14:23:12 crc kubenswrapper[4836]: I0217 14:23:12.676652 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" event={"ID":"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166","Type":"ContainerDied","Data":"b660291d58fd283d9879c66a66dc2fa63506daff41626c13a2530e7ee2f8b4f6"} Feb 17 14:23:13 crc kubenswrapper[4836]: I0217 14:23:13.686143 4836 generic.go:334] "Generic (PLEG): container finished" podID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerID="3c6d713658574434865f5b99cb3e7536bfe66ce51a3f0c64ae27f13273de57c6" exitCode=0 Feb 17 14:23:13 crc kubenswrapper[4836]: I0217 14:23:13.686236 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" event={"ID":"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166","Type":"ContainerDied","Data":"3c6d713658574434865f5b99cb3e7536bfe66ce51a3f0c64ae27f13273de57c6"} Feb 17 14:23:14 crc kubenswrapper[4836]: I0217 14:23:14.697449 4836 generic.go:334] "Generic (PLEG): container finished" podID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerID="3ac8a1b6212e43d3b69fdc02f6daca525345d9fe092da6fec178ed9daccd3f4e" exitCode=0 Feb 17 14:23:14 crc kubenswrapper[4836]: I0217 14:23:14.697636 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" event={"ID":"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166","Type":"ContainerDied","Data":"3ac8a1b6212e43d3b69fdc02f6daca525345d9fe092da6fec178ed9daccd3f4e"} Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.020421 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.127982 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-util\") pod \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.128142 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-bundle\") pod \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.128311 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4g2g\" (UniqueName: \"kubernetes.io/projected/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-kube-api-access-s4g2g\") pod \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\" (UID: \"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166\") " Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.129607 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-bundle" (OuterVolumeSpecName: "bundle") pod "dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" (UID: "dc1ca64e-8914-44ae-8d9e-d7c63ba6e166"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.134715 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-kube-api-access-s4g2g" (OuterVolumeSpecName: "kube-api-access-s4g2g") pod "dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" (UID: "dc1ca64e-8914-44ae-8d9e-d7c63ba6e166"). InnerVolumeSpecName "kube-api-access-s4g2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.141909 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-util" (OuterVolumeSpecName: "util") pod "dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" (UID: "dc1ca64e-8914-44ae-8d9e-d7c63ba6e166"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.230171 4836 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-util\") on node \"crc\" DevicePath \"\"" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.230225 4836 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.230246 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4g2g\" (UniqueName: \"kubernetes.io/projected/dc1ca64e-8914-44ae-8d9e-d7c63ba6e166-kube-api-access-s4g2g\") on node \"crc\" DevicePath \"\"" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.715968 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" event={"ID":"dc1ca64e-8914-44ae-8d9e-d7c63ba6e166","Type":"ContainerDied","Data":"0ab5a5cd357a3e9c8bf32c08f6dc8b2f32cd15e6e31d32610488207b53a77062"} Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.716017 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab5a5cd357a3e9c8bf32c08f6dc8b2f32cd15e6e31d32610488207b53a77062" Feb 17 14:23:16 crc kubenswrapper[4836]: I0217 14:23:16.716048 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.839143 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk"] Feb 17 14:23:22 crc kubenswrapper[4836]: E0217 14:23:22.839968 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerName="extract" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.839982 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerName="extract" Feb 17 14:23:22 crc kubenswrapper[4836]: E0217 14:23:22.840003 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerName="pull" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.840009 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerName="pull" Feb 17 14:23:22 crc kubenswrapper[4836]: E0217 14:23:22.840028 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerName="util" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.840034 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerName="util" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.840148 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1ca64e-8914-44ae-8d9e-d7c63ba6e166" containerName="extract" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.840594 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.851581 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-pl9gf" Feb 17 14:23:22 crc kubenswrapper[4836]: I0217 14:23:22.862245 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk"] Feb 17 14:23:23 crc kubenswrapper[4836]: I0217 14:23:23.007963 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf6rd\" (UniqueName: \"kubernetes.io/projected/4afa09e7-5273-4170-8c40-6c3ed66e6b8e-kube-api-access-sf6rd\") pod \"openstack-operator-controller-init-7464dc569f-6nqxk\" (UID: \"4afa09e7-5273-4170-8c40-6c3ed66e6b8e\") " pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" Feb 17 14:23:23 crc kubenswrapper[4836]: I0217 14:23:23.109636 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf6rd\" (UniqueName: \"kubernetes.io/projected/4afa09e7-5273-4170-8c40-6c3ed66e6b8e-kube-api-access-sf6rd\") pod \"openstack-operator-controller-init-7464dc569f-6nqxk\" (UID: \"4afa09e7-5273-4170-8c40-6c3ed66e6b8e\") " pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" Feb 17 14:23:23 crc kubenswrapper[4836]: I0217 14:23:23.129015 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf6rd\" (UniqueName: \"kubernetes.io/projected/4afa09e7-5273-4170-8c40-6c3ed66e6b8e-kube-api-access-sf6rd\") pod \"openstack-operator-controller-init-7464dc569f-6nqxk\" (UID: \"4afa09e7-5273-4170-8c40-6c3ed66e6b8e\") " pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" Feb 17 14:23:23 crc kubenswrapper[4836]: I0217 14:23:23.159772 4836 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" Feb 17 14:23:23 crc kubenswrapper[4836]: I0217 14:23:23.646619 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk"] Feb 17 14:23:23 crc kubenswrapper[4836]: I0217 14:23:23.905856 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" event={"ID":"4afa09e7-5273-4170-8c40-6c3ed66e6b8e","Type":"ContainerStarted","Data":"030e0ff9289543c8e0946be59093d692b43a81a2959df444698c62084e3d15c3"} Feb 17 14:23:29 crc kubenswrapper[4836]: I0217 14:23:29.765658 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:23:29 crc kubenswrapper[4836]: I0217 14:23:29.766056 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:23:30 crc kubenswrapper[4836]: I0217 14:23:30.252625 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" event={"ID":"4afa09e7-5273-4170-8c40-6c3ed66e6b8e","Type":"ContainerStarted","Data":"23eea63b1b0347ac660ddb33113cf73ec732e17d4cb9cae17340f007d044eb4f"} Feb 17 14:23:30 crc kubenswrapper[4836]: I0217 14:23:30.252799 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" Feb 17 14:23:30 crc 
kubenswrapper[4836]: I0217 14:23:30.288194 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" podStartSLOduration=2.216200984 podStartE2EDuration="8.288173353s" podCreationTimestamp="2026-02-17 14:23:22 +0000 UTC" firstStartedPulling="2026-02-17 14:23:23.680590857 +0000 UTC m=+1030.023519116" lastFinishedPulling="2026-02-17 14:23:29.752563216 +0000 UTC m=+1036.095491485" observedRunningTime="2026-02-17 14:23:30.283788541 +0000 UTC m=+1036.626716860" watchObservedRunningTime="2026-02-17 14:23:30.288173353 +0000 UTC m=+1036.631101632" Feb 17 14:23:43 crc kubenswrapper[4836]: I0217 14:23:43.163401 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7464dc569f-6nqxk" Feb 17 14:23:59 crc kubenswrapper[4836]: I0217 14:23:59.765863 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:23:59 crc kubenswrapper[4836]: I0217 14:23:59.766901 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:23:59 crc kubenswrapper[4836]: I0217 14:23:59.766997 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:23:59 crc kubenswrapper[4836]: I0217 14:23:59.768111 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"89b78e4cc2264dc06417ab903dd2a1618c1aee2c1d950babae0b011a2e9eac59"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:23:59 crc kubenswrapper[4836]: I0217 14:23:59.768221 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://89b78e4cc2264dc06417ab903dd2a1618c1aee2c1d950babae0b011a2e9eac59" gracePeriod=600 Feb 17 14:24:00 crc kubenswrapper[4836]: I0217 14:24:00.461819 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="89b78e4cc2264dc06417ab903dd2a1618c1aee2c1d950babae0b011a2e9eac59" exitCode=0 Feb 17 14:24:00 crc kubenswrapper[4836]: I0217 14:24:00.461884 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"89b78e4cc2264dc06417ab903dd2a1618c1aee2c1d950babae0b011a2e9eac59"} Feb 17 14:24:00 crc kubenswrapper[4836]: I0217 14:24:00.462171 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"790067b54b3531952a7756a09b793da1fc53330ef71b8011e59f530ae444594e"} Feb 17 14:24:00 crc kubenswrapper[4836]: I0217 14:24:00.462197 4836 scope.go:117] "RemoveContainer" containerID="d7f43ee4be167fb696d056804834f76d74b6a96b2dd00fc7f1328e7b9c2e7869" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.006084 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm"] Feb 17 14:24:13 crc 
kubenswrapper[4836]: I0217 14:24:13.009033 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.016173 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-gxh86" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.019574 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-54696"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.020873 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.030617 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kg9lq" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.045429 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-54696"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.058309 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.093442 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.095069 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.110443 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-zxb25"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.111490 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.124692 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zhj69" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.125012 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6xkvk" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.159927 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.161369 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.169135 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cptpb" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.186110 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v292x\" (UniqueName: \"kubernetes.io/projected/12cff299-e5ea-40a9-8a69-528c478cd0a0-kube-api-access-v292x\") pod \"cinder-operator-controller-manager-5d946d989d-b6cfm\" (UID: \"12cff299-e5ea-40a9-8a69-528c478cd0a0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.186168 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcc2j\" (UniqueName: \"kubernetes.io/projected/a7c6acc7-4243-4c0d-a723-e83dc2e054df-kube-api-access-xcc2j\") pod \"barbican-operator-controller-manager-868647ff47-54696\" (UID: \"a7c6acc7-4243-4c0d-a723-e83dc2e054df\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.186318 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.255116 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-zxb25"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.275366 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.276428 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.287491 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v292x\" (UniqueName: \"kubernetes.io/projected/12cff299-e5ea-40a9-8a69-528c478cd0a0-kube-api-access-v292x\") pod \"cinder-operator-controller-manager-5d946d989d-b6cfm\" (UID: \"12cff299-e5ea-40a9-8a69-528c478cd0a0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.287565 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcc2j\" (UniqueName: \"kubernetes.io/projected/a7c6acc7-4243-4c0d-a723-e83dc2e054df-kube-api-access-xcc2j\") pod \"barbican-operator-controller-manager-868647ff47-54696\" (UID: \"a7c6acc7-4243-4c0d-a723-e83dc2e054df\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.287625 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvkhm\" (UniqueName: \"kubernetes.io/projected/ce77a6a5-95bb-4758-8a38-cdc354fd9d6c-kube-api-access-pvkhm\") pod \"glance-operator-controller-manager-77987464f4-zxb25\" (UID: \"ce77a6a5-95bb-4758-8a38-cdc354fd9d6c\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.287673 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmz8x\" (UniqueName: \"kubernetes.io/projected/0962ca43-43c4-4884-bd8e-889835f83632-kube-api-access-bmz8x\") pod \"designate-operator-controller-manager-6d8bf5c495-8wdwr\" (UID: \"0962ca43-43c4-4884-bd8e-889835f83632\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" Feb 17 
14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.287709 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5dzj\" (UniqueName: \"kubernetes.io/projected/c3d9def3-7f53-4acc-9c46-d37ddf41e3b7-kube-api-access-s5dzj\") pod \"heat-operator-controller-manager-69f49c598c-7vwdd\" (UID: \"c3d9def3-7f53-4acc-9c46-d37ddf41e3b7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.296374 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-kjtcf" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.296623 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.345392 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.351054 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcc2j\" (UniqueName: \"kubernetes.io/projected/a7c6acc7-4243-4c0d-a723-e83dc2e054df-kube-api-access-xcc2j\") pod \"barbican-operator-controller-manager-868647ff47-54696\" (UID: \"a7c6acc7-4243-4c0d-a723-e83dc2e054df\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.356033 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v292x\" (UniqueName: \"kubernetes.io/projected/12cff299-e5ea-40a9-8a69-528c478cd0a0-kube-api-access-v292x\") pod \"cinder-operator-controller-manager-5d946d989d-b6cfm\" (UID: \"12cff299-e5ea-40a9-8a69-528c478cd0a0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" Feb 17 14:24:13 crc 
kubenswrapper[4836]: I0217 14:24:13.362237 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.390092 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvkhm\" (UniqueName: \"kubernetes.io/projected/ce77a6a5-95bb-4758-8a38-cdc354fd9d6c-kube-api-access-pvkhm\") pod \"glance-operator-controller-manager-77987464f4-zxb25\" (UID: \"ce77a6a5-95bb-4758-8a38-cdc354fd9d6c\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.390213 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmz8x\" (UniqueName: \"kubernetes.io/projected/0962ca43-43c4-4884-bd8e-889835f83632-kube-api-access-bmz8x\") pod \"designate-operator-controller-manager-6d8bf5c495-8wdwr\" (UID: \"0962ca43-43c4-4884-bd8e-889835f83632\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.390268 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5dzj\" (UniqueName: \"kubernetes.io/projected/c3d9def3-7f53-4acc-9c46-d37ddf41e3b7-kube-api-access-s5dzj\") pod \"heat-operator-controller-manager-69f49c598c-7vwdd\" (UID: \"c3d9def3-7f53-4acc-9c46-d37ddf41e3b7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.390371 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmrdc\" (UniqueName: \"kubernetes.io/projected/f2e6ac9f-ee72-4a28-b298-9b2f918d0c95-kube-api-access-xmrdc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bv7s8\" (UID: \"f2e6ac9f-ee72-4a28-b298-9b2f918d0c95\") " 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.391561 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.401766 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.403440 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.435590 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.435923 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s5464" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.460743 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.487629 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvkhm\" (UniqueName: \"kubernetes.io/projected/ce77a6a5-95bb-4758-8a38-cdc354fd9d6c-kube-api-access-pvkhm\") pod \"glance-operator-controller-manager-77987464f4-zxb25\" (UID: \"ce77a6a5-95bb-4758-8a38-cdc354fd9d6c\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.489212 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmz8x\" (UniqueName: 
\"kubernetes.io/projected/0962ca43-43c4-4884-bd8e-889835f83632-kube-api-access-bmz8x\") pod \"designate-operator-controller-manager-6d8bf5c495-8wdwr\" (UID: \"0962ca43-43c4-4884-bd8e-889835f83632\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.508804 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.509007 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crsk\" (UniqueName: \"kubernetes.io/projected/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-kube-api-access-9crsk\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.509077 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmrdc\" (UniqueName: \"kubernetes.io/projected/f2e6ac9f-ee72-4a28-b298-9b2f918d0c95-kube-api-access-xmrdc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bv7s8\" (UID: \"f2e6ac9f-ee72-4a28-b298-9b2f918d0c95\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.528240 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5dzj\" (UniqueName: \"kubernetes.io/projected/c3d9def3-7f53-4acc-9c46-d37ddf41e3b7-kube-api-access-s5dzj\") pod \"heat-operator-controller-manager-69f49c598c-7vwdd\" (UID: 
\"c3d9def3-7f53-4acc-9c46-d37ddf41e3b7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.552346 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.553865 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.568447 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4jqnd" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.574006 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmrdc\" (UniqueName: \"kubernetes.io/projected/f2e6ac9f-ee72-4a28-b298-9b2f918d0c95-kube-api-access-xmrdc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bv7s8\" (UID: \"f2e6ac9f-ee72-4a28-b298-9b2f918d0c95\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.574083 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.575018 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.594708 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6wzf8" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.597127 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.611331 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.613577 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.613782 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crsk\" (UniqueName: \"kubernetes.io/projected/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-kube-api-access-9crsk\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:13 crc kubenswrapper[4836]: E0217 14:24:13.615882 4836 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:13 crc kubenswrapper[4836]: E0217 14:24:13.615985 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert 
podName:a1ae24b8-83c8-416d-9d39-24d84eb6cd83 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:14.115940765 +0000 UTC m=+1080.458869034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert") pod "infra-operator-controller-manager-79d975b745-f4fvp" (UID: "a1ae24b8-83c8-416d-9d39-24d84eb6cd83") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.623651 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.626479 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.630670 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-plc5f" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.630985 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.639521 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.642695 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.648346 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rrpf5" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.658100 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crsk\" (UniqueName: \"kubernetes.io/projected/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-kube-api-access-9crsk\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.656026 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.669512 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.678016 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.679458 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.686612 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-lw755" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.692047 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.776975 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.778900 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.781910 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ct5wd" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.787458 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.795374 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nnnd\" (UniqueName: \"kubernetes.io/projected/e805966b-ea22-4c2a-a6c4-3622300fcb2f-kube-api-access-4nnnd\") pod \"ironic-operator-controller-manager-554564d7fc-k9p46\" (UID: \"e805966b-ea22-4c2a-a6c4-3622300fcb2f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.796009 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhcrg\" (UniqueName: 
\"kubernetes.io/projected/18a63480-edc2-44ed-bd43-b7750f7f8f33-kube-api-access-fhcrg\") pod \"keystone-operator-controller-manager-b4d948c87-qnb5b\" (UID: \"18a63480-edc2-44ed-bd43-b7750f7f8f33\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.796448 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.796636 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.806816 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8jl\" (UniqueName: \"kubernetes.io/projected/9ccd7ed5-2772-4482-af31-2578e98011fd-kube-api-access-wn8jl\") pod \"manila-operator-controller-manager-54f6768c69-6lzts\" (UID: \"9ccd7ed5-2772-4482-af31-2578e98011fd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.812326 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.916128 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcrg\" (UniqueName: \"kubernetes.io/projected/18a63480-edc2-44ed-bd43-b7750f7f8f33-kube-api-access-fhcrg\") pod \"keystone-operator-controller-manager-b4d948c87-qnb5b\" (UID: \"18a63480-edc2-44ed-bd43-b7750f7f8f33\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.916217 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8jl\" (UniqueName: \"kubernetes.io/projected/9ccd7ed5-2772-4482-af31-2578e98011fd-kube-api-access-wn8jl\") pod \"manila-operator-controller-manager-54f6768c69-6lzts\" (UID: \"9ccd7ed5-2772-4482-af31-2578e98011fd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.950382 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm"] Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.951851 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.916288 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nnnd\" (UniqueName: \"kubernetes.io/projected/e805966b-ea22-4c2a-a6c4-3622300fcb2f-kube-api-access-4nnnd\") pod \"ironic-operator-controller-manager-554564d7fc-k9p46\" (UID: \"e805966b-ea22-4c2a-a6c4-3622300fcb2f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.957575 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwb95\" (UniqueName: \"kubernetes.io/projected/3d12b131-73a0-477e-ab9e-579309b0f5b1-kube-api-access-mwb95\") pod \"neutron-operator-controller-manager-64ddbf8bb-6c4rn\" (UID: \"3d12b131-73a0-477e-ab9e-579309b0f5b1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.957624 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2cc\" (UniqueName: \"kubernetes.io/projected/7b9749c7-038f-4814-9357-623346c9172c-kube-api-access-6x2cc\") pod \"mariadb-operator-controller-manager-6994f66f48-zkzrs\" (UID: \"7b9749c7-038f-4814-9357-623346c9172c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.959736 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4f74j" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.974696 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8jl\" (UniqueName: \"kubernetes.io/projected/9ccd7ed5-2772-4482-af31-2578e98011fd-kube-api-access-wn8jl\") pod 
\"manila-operator-controller-manager-54f6768c69-6lzts\" (UID: \"9ccd7ed5-2772-4482-af31-2578e98011fd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.977882 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcrg\" (UniqueName: \"kubernetes.io/projected/18a63480-edc2-44ed-bd43-b7750f7f8f33-kube-api-access-fhcrg\") pod \"keystone-operator-controller-manager-b4d948c87-qnb5b\" (UID: \"18a63480-edc2-44ed-bd43-b7750f7f8f33\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.980381 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nnnd\" (UniqueName: \"kubernetes.io/projected/e805966b-ea22-4c2a-a6c4-3622300fcb2f-kube-api-access-4nnnd\") pod \"ironic-operator-controller-manager-554564d7fc-k9p46\" (UID: \"e805966b-ea22-4c2a-a6c4-3622300fcb2f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.992475 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" Feb 17 14:24:13 crc kubenswrapper[4836]: I0217 14:24:13.993217 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.059100 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxl4z\" (UniqueName: \"kubernetes.io/projected/52a90e1a-0e2d-4488-8a1a-34de15bfa3a5-kube-api-access-jxl4z\") pod \"nova-operator-controller-manager-567668f5cf-5hz7c\" (UID: \"52a90e1a-0e2d-4488-8a1a-34de15bfa3a5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.059233 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwb95\" (UniqueName: \"kubernetes.io/projected/3d12b131-73a0-477e-ab9e-579309b0f5b1-kube-api-access-mwb95\") pod \"neutron-operator-controller-manager-64ddbf8bb-6c4rn\" (UID: \"3d12b131-73a0-477e-ab9e-579309b0f5b1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.059268 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x2cc\" (UniqueName: \"kubernetes.io/projected/7b9749c7-038f-4814-9357-623346c9172c-kube-api-access-6x2cc\") pod \"mariadb-operator-controller-manager-6994f66f48-zkzrs\" (UID: \"7b9749c7-038f-4814-9357-623346c9172c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.059316 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxx68\" (UniqueName: \"kubernetes.io/projected/1bb12b86-1f25-4dd9-a44d-449a6deee701-kube-api-access-rxx68\") pod 
\"octavia-operator-controller-manager-69f8888797-llzlm\" (UID: \"1bb12b86-1f25-4dd9-a44d-449a6deee701\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.093384 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwb95\" (UniqueName: \"kubernetes.io/projected/3d12b131-73a0-477e-ab9e-579309b0f5b1-kube-api-access-mwb95\") pod \"neutron-operator-controller-manager-64ddbf8bb-6c4rn\" (UID: \"3d12b131-73a0-477e-ab9e-579309b0f5b1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.113369 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x2cc\" (UniqueName: \"kubernetes.io/projected/7b9749c7-038f-4814-9357-623346c9172c-kube-api-access-6x2cc\") pod \"mariadb-operator-controller-manager-6994f66f48-zkzrs\" (UID: \"7b9749c7-038f-4814-9357-623346c9172c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.125026 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.126654 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.133212 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.133531 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sz5r7" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.138163 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.146819 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.156404 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.161248 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxx68\" (UniqueName: \"kubernetes.io/projected/1bb12b86-1f25-4dd9-a44d-449a6deee701-kube-api-access-rxx68\") pod \"octavia-operator-controller-manager-69f8888797-llzlm\" (UID: \"1bb12b86-1f25-4dd9-a44d-449a6deee701\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.161370 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.161496 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxl4z\" (UniqueName: \"kubernetes.io/projected/52a90e1a-0e2d-4488-8a1a-34de15bfa3a5-kube-api-access-jxl4z\") pod \"nova-operator-controller-manager-567668f5cf-5hz7c\" (UID: \"52a90e1a-0e2d-4488-8a1a-34de15bfa3a5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" Feb 17 14:24:14 crc kubenswrapper[4836]: E0217 14:24:14.162252 4836 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.170977 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.172178 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" Feb 17 14:24:14 crc kubenswrapper[4836]: E0217 14:24:14.173453 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert podName:a1ae24b8-83c8-416d-9d39-24d84eb6cd83 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:15.173422866 +0000 UTC m=+1081.516351125 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert") pod "infra-operator-controller-manager-79d975b745-f4fvp" (UID: "a1ae24b8-83c8-416d-9d39-24d84eb6cd83") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.176567 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-p4gww" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.178378 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-p7w4w" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.178521 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.182787 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.184957 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.191642 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxl4z\" (UniqueName: \"kubernetes.io/projected/52a90e1a-0e2d-4488-8a1a-34de15bfa3a5-kube-api-access-jxl4z\") pod \"nova-operator-controller-manager-567668f5cf-5hz7c\" (UID: \"52a90e1a-0e2d-4488-8a1a-34de15bfa3a5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.192710 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxx68\" (UniqueName: \"kubernetes.io/projected/1bb12b86-1f25-4dd9-a44d-449a6deee701-kube-api-access-rxx68\") pod \"octavia-operator-controller-manager-69f8888797-llzlm\" (UID: \"1bb12b86-1f25-4dd9-a44d-449a6deee701\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.204762 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bwcmk" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.208553 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.228181 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.237666 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.241394 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.250475 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7k58q" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.257355 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.261752 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.265058 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzgjp\" (UniqueName: \"kubernetes.io/projected/4affaaf4-1113-4635-b30f-da26e04f6662-kube-api-access-fzgjp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.265134 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb76g\" (UniqueName: \"kubernetes.io/projected/cf7c4631-b19a-4160-8581-15f72869a60b-kube-api-access-fb76g\") pod \"placement-operator-controller-manager-8497b45c89-jnxzt\" (UID: \"cf7c4631-b19a-4160-8581-15f72869a60b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.265338 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.265397 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6mg\" (UniqueName: \"kubernetes.io/projected/d0c3c41c-ac60-40f0-bdfb-8fe641c9426a-kube-api-access-kv6mg\") pod \"swift-operator-controller-manager-68f46476f-7ktgs\" (UID: \"d0c3c41c-ac60-40f0-bdfb-8fe641c9426a\") " 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.265444 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqpqt\" (UniqueName: \"kubernetes.io/projected/f6ba6343-872d-4e36-accf-959bb437f82d-kube-api-access-xqpqt\") pod \"ovn-operator-controller-manager-d44cf6b75-mq76b\" (UID: \"f6ba6343-872d-4e36-accf-959bb437f82d\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.318392 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.319474 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.346146 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.367065 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.367136 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv6mg\" (UniqueName: \"kubernetes.io/projected/d0c3c41c-ac60-40f0-bdfb-8fe641c9426a-kube-api-access-kv6mg\") pod \"swift-operator-controller-manager-68f46476f-7ktgs\" (UID: \"d0c3c41c-ac60-40f0-bdfb-8fe641c9426a\") " 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.367184 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqpqt\" (UniqueName: \"kubernetes.io/projected/f6ba6343-872d-4e36-accf-959bb437f82d-kube-api-access-xqpqt\") pod \"ovn-operator-controller-manager-d44cf6b75-mq76b\" (UID: \"f6ba6343-872d-4e36-accf-959bb437f82d\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.367244 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzgjp\" (UniqueName: \"kubernetes.io/projected/4affaaf4-1113-4635-b30f-da26e04f6662-kube-api-access-fzgjp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.367283 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb76g\" (UniqueName: \"kubernetes.io/projected/cf7c4631-b19a-4160-8581-15f72869a60b-kube-api-access-fb76g\") pod \"placement-operator-controller-manager-8497b45c89-jnxzt\" (UID: \"cf7c4631-b19a-4160-8581-15f72869a60b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.367434 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgtd\" (UniqueName: \"kubernetes.io/projected/a3c22d9b-6ba0-4dd2-861d-8685c18e9330-kube-api-access-mzgtd\") pod \"telemetry-operator-controller-manager-6d6964fcdb-rbq62\" (UID: \"a3c22d9b-6ba0-4dd2-861d-8685c18e9330\") " pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" Feb 17 14:24:14 crc kubenswrapper[4836]: 
E0217 14:24:14.367684 4836 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:14 crc kubenswrapper[4836]: E0217 14:24:14.367769 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert podName:4affaaf4-1113-4635-b30f-da26e04f6662 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:14.86773602 +0000 UTC m=+1081.210664289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" (UID: "4affaaf4-1113-4635-b30f-da26e04f6662") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.374434 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ztvz2"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.375986 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.389731 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qgk6f" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.397617 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.401310 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzgjp\" (UniqueName: \"kubernetes.io/projected/4affaaf4-1113-4635-b30f-da26e04f6662-kube-api-access-fzgjp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.403914 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.418743 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqpqt\" (UniqueName: \"kubernetes.io/projected/f6ba6343-872d-4e36-accf-959bb437f82d-kube-api-access-xqpqt\") pod \"ovn-operator-controller-manager-d44cf6b75-mq76b\" (UID: \"f6ba6343-872d-4e36-accf-959bb437f82d\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.428388 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv6mg\" (UniqueName: \"kubernetes.io/projected/d0c3c41c-ac60-40f0-bdfb-8fe641c9426a-kube-api-access-kv6mg\") pod \"swift-operator-controller-manager-68f46476f-7ktgs\" (UID: \"d0c3c41c-ac60-40f0-bdfb-8fe641c9426a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.428555 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb76g\" (UniqueName: \"kubernetes.io/projected/cf7c4631-b19a-4160-8581-15f72869a60b-kube-api-access-fb76g\") pod \"placement-operator-controller-manager-8497b45c89-jnxzt\" 
(UID: \"cf7c4631-b19a-4160-8581-15f72869a60b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.469405 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzgtd\" (UniqueName: \"kubernetes.io/projected/a3c22d9b-6ba0-4dd2-861d-8685c18e9330-kube-api-access-mzgtd\") pod \"telemetry-operator-controller-manager-6d6964fcdb-rbq62\" (UID: \"a3c22d9b-6ba0-4dd2-861d-8685c18e9330\") " pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.470157 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gnl\" (UniqueName: \"kubernetes.io/projected/d4aa765a-0f56-4f05-b02f-f041841bc97d-kube-api-access-j9gnl\") pod \"test-operator-controller-manager-7866795846-ztvz2\" (UID: \"d4aa765a-0f56-4f05-b02f-f041841bc97d\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.482342 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ztvz2"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.492232 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzgtd\" (UniqueName: \"kubernetes.io/projected/a3c22d9b-6ba0-4dd2-861d-8685c18e9330-kube-api-access-mzgtd\") pod \"telemetry-operator-controller-manager-6d6964fcdb-rbq62\" (UID: \"a3c22d9b-6ba0-4dd2-861d-8685c18e9330\") " pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.527544 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.529480 4836 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.534798 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nvvvm" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.534878 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-p4gww" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.539481 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.544837 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.565634 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-p7w4w" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.571633 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9gnl\" (UniqueName: \"kubernetes.io/projected/d4aa765a-0f56-4f05-b02f-f041841bc97d-kube-api-access-j9gnl\") pod \"test-operator-controller-manager-7866795846-ztvz2\" (UID: \"d4aa765a-0f56-4f05-b02f-f041841bc97d\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.571737 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6h8\" (UniqueName: \"kubernetes.io/projected/1f238b1a-4c0c-45de-bb7a-12946f426b89-kube-api-access-nf6h8\") pod \"watcher-operator-controller-manager-5db88f68c-lmtng\" (UID: 
\"1f238b1a-4c0c-45de-bb7a-12946f426b89\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.574469 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.602213 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bwcmk" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.606833 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9gnl\" (UniqueName: \"kubernetes.io/projected/d4aa765a-0f56-4f05-b02f-f041841bc97d-kube-api-access-j9gnl\") pod \"test-operator-controller-manager-7866795846-ztvz2\" (UID: \"d4aa765a-0f56-4f05-b02f-f041841bc97d\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.617629 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.619768 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7k58q" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.628773 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.679413 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6h8\" (UniqueName: \"kubernetes.io/projected/1f238b1a-4c0c-45de-bb7a-12946f426b89-kube-api-access-nf6h8\") pod \"watcher-operator-controller-manager-5db88f68c-lmtng\" (UID: \"1f238b1a-4c0c-45de-bb7a-12946f426b89\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.794131 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.804857 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6h8\" (UniqueName: \"kubernetes.io/projected/1f238b1a-4c0c-45de-bb7a-12946f426b89-kube-api-access-nf6h8\") pod \"watcher-operator-controller-manager-5db88f68c-lmtng\" (UID: \"1f238b1a-4c0c-45de-bb7a-12946f426b89\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.816798 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qgk6f" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.817886 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.817955 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" event={"ID":"12cff299-e5ea-40a9-8a69-528c478cd0a0","Type":"ContainerStarted","Data":"f0e2544dbf0606b5855b9877ed7f1c369ec341b58e357490987b5a8c45726507"} Feb 17 14:24:14 crc 
kubenswrapper[4836]: I0217 14:24:14.818002 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.818681 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.819055 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.823711 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.826746 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.827010 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zv8s6" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.827134 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.827271 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7vtfx" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.828815 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.828818 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.892699 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wzm\" (UniqueName: \"kubernetes.io/projected/d423f7ba-2751-4d99-8102-3bc52b302161-kube-api-access-c7wzm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w4dds\" (UID: \"d423f7ba-2751-4d99-8102-3bc52b302161\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.892770 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.892851 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.892920 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msdtf\" (UniqueName: \"kubernetes.io/projected/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-kube-api-access-msdtf\") pod 
\"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.894058 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:14 crc kubenswrapper[4836]: E0217 14:24:14.894429 4836 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:14 crc kubenswrapper[4836]: E0217 14:24:14.894509 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert podName:4affaaf4-1113-4635-b30f-da26e04f6662 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:15.894481193 +0000 UTC m=+1082.237409462 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" (UID: "4affaaf4-1113-4635-b30f-da26e04f6662") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.942747 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm"] Feb 17 14:24:14 crc kubenswrapper[4836]: I0217 14:24:14.950175 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-54696"] Feb 17 14:24:14 crc kubenswrapper[4836]: W0217 14:24:14.955196 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e6ac9f_ee72_4a28_b298_9b2f918d0c95.slice/crio-1c5266b110bd54fbb84c71c41cbfe10738eb0ef0054b9bc9159f134c8b2ea0dc WatchSource:0}: Error finding container 1c5266b110bd54fbb84c71c41cbfe10738eb0ef0054b9bc9159f134c8b2ea0dc: Status 404 returned error can't find the container with id 1c5266b110bd54fbb84c71c41cbfe10738eb0ef0054b9bc9159f134c8b2ea0dc Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:14.994890 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:14.995285 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msdtf\" (UniqueName: \"kubernetes.io/projected/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-kube-api-access-msdtf\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:14.995415 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7wzm\" (UniqueName: \"kubernetes.io/projected/d423f7ba-2751-4d99-8102-3bc52b302161-kube-api-access-c7wzm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w4dds\" (UID: \"d423f7ba-2751-4d99-8102-3bc52b302161\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:14.995453 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:14.995497 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:14.995736 4836 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:14.995829 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:15.49580387 +0000 UTC m=+1081.838732139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "webhook-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:14.996380 4836 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:14.996409 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:15.496400657 +0000 UTC m=+1081.839328926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "metrics-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.039543 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msdtf\" (UniqueName: \"kubernetes.io/projected/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-kube-api-access-msdtf\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.043969 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wzm\" (UniqueName: \"kubernetes.io/projected/d423f7ba-2751-4d99-8102-3bc52b302161-kube-api-access-c7wzm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w4dds\" (UID: \"d423f7ba-2751-4d99-8102-3bc52b302161\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.048149 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8"] Feb 17 14:24:15 crc kubenswrapper[4836]: W0217 14:24:15.066028 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d9def3_7f53_4acc_9c46_d37ddf41e3b7.slice/crio-f26465dbce79c4b5ef61f928c2a02c31f40927ec75eefaf99a897962ce499a52 WatchSource:0}: Error finding container f26465dbce79c4b5ef61f928c2a02c31f40927ec75eefaf99a897962ce499a52: Status 404 returned error can't find the container with id f26465dbce79c4b5ef61f928c2a02c31f40927ec75eefaf99a897962ce499a52 Feb 17 14:24:15 crc kubenswrapper[4836]: W0217 14:24:15.083769 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce77a6a5_95bb_4758_8a38_cdc354fd9d6c.slice/crio-d824721a82bfdf10244745d8cd3c55eae95498b0fa8b0395b92cd93df225144b WatchSource:0}: Error finding container d824721a82bfdf10244745d8cd3c55eae95498b0fa8b0395b92cd93df225144b: Status 404 returned error can't find the container with id d824721a82bfdf10244745d8cd3c55eae95498b0fa8b0395b92cd93df225144b Feb 17 14:24:15 crc kubenswrapper[4836]: W0217 14:24:15.087366 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ccd7ed5_2772_4482_af31_2578e98011fd.slice/crio-d8e81765f872a70ad232d847ea502382138fc1e8a48c4df928c04a6c2002df5e WatchSource:0}: Error finding container d8e81765f872a70ad232d847ea502382138fc1e8a48c4df928c04a6c2002df5e: Status 404 returned error can't find the container with id d8e81765f872a70ad232d847ea502382138fc1e8a48c4df928c04a6c2002df5e Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.095453 4836 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-zxb25"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.107216 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.113925 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.198794 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.199103 4836 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.199231 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert podName:a1ae24b8-83c8-416d-9d39-24d84eb6cd83 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:17.199199706 +0000 UTC m=+1083.542127975 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert") pod "infra-operator-controller-manager-79d975b745-f4fvp" (UID: "a1ae24b8-83c8-416d-9d39-24d84eb6cd83") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.250184 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.298870 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.513855 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.514599 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.514500 4836 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.514850 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:16.514828588 +0000 UTC m=+1082.857756857 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "metrics-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.519041 4836 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.519185 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:16.519151723 +0000 UTC m=+1082.862079982 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "webhook-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.647025 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.666117 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.700779 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.711138 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.719993 
4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.736614 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs"] Feb 17 14:24:15 crc kubenswrapper[4836]: W0217 14:24:15.772686 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bb12b86_1f25_4dd9_a44d_449a6deee701.slice/crio-40b90be755541b9ddeaa9549b544f011bdd8e2685e2138b6bdb095591a633653 WatchSource:0}: Error finding container 40b90be755541b9ddeaa9549b544f011bdd8e2685e2138b6bdb095591a633653: Status 404 returned error can't find the container with id 40b90be755541b9ddeaa9549b544f011bdd8e2685e2138b6bdb095591a633653 Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.819607 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" event={"ID":"3d12b131-73a0-477e-ab9e-579309b0f5b1","Type":"ContainerStarted","Data":"c8912a4bc2b101eba8bbee5f1f7afc6d900d84969b246a0e6abb7d5c0cd1df2d"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.831497 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" event={"ID":"9ccd7ed5-2772-4482-af31-2578e98011fd","Type":"ContainerStarted","Data":"d8e81765f872a70ad232d847ea502382138fc1e8a48c4df928c04a6c2002df5e"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.841758 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.849481 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng"] Feb 17 14:24:15 crc 
kubenswrapper[4836]: I0217 14:24:15.849677 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" event={"ID":"0962ca43-43c4-4884-bd8e-889835f83632","Type":"ContainerStarted","Data":"f394b4f4f43975965ffb40a146c483db0820fddb1dafee6dde3e2b1a9ffb53f9"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.863856 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" event={"ID":"52a90e1a-0e2d-4488-8a1a-34de15bfa3a5","Type":"ContainerStarted","Data":"7b277bafe1bdb6ad7d8d85eb8eb55e3fed5a8cf1ca8b1e29105ae1ba7b762ecc"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.869019 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" event={"ID":"1bb12b86-1f25-4dd9-a44d-449a6deee701","Type":"ContainerStarted","Data":"40b90be755541b9ddeaa9549b544f011bdd8e2685e2138b6bdb095591a633653"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.873341 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt"] Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.882778 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" event={"ID":"e805966b-ea22-4c2a-a6c4-3622300fcb2f","Type":"ContainerStarted","Data":"ad645d8a39c485ac1537e0c873a6462efe04d66ebbd924bc5bbc4a3bfa35f8c3"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.889542 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" event={"ID":"a7c6acc7-4243-4c0d-a723-e83dc2e054df","Type":"ContainerStarted","Data":"ec05a892d51c7f23653352e29351486f6303b3c71b861fa1a6b3fc41171fa4c0"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.894303 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" event={"ID":"f6ba6343-872d-4e36-accf-959bb437f82d","Type":"ContainerStarted","Data":"dcbc61f55de5c6a8bc8c1201190948ae939f79d04e713c062f127c47cea3b8d2"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.904510 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" event={"ID":"c3d9def3-7f53-4acc-9c46-d37ddf41e3b7","Type":"ContainerStarted","Data":"f26465dbce79c4b5ef61f928c2a02c31f40927ec75eefaf99a897962ce499a52"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.910360 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" event={"ID":"f2e6ac9f-ee72-4a28-b298-9b2f918d0c95","Type":"ContainerStarted","Data":"1c5266b110bd54fbb84c71c41cbfe10738eb0ef0054b9bc9159f134c8b2ea0dc"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.914240 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" event={"ID":"ce77a6a5-95bb-4758-8a38-cdc354fd9d6c","Type":"ContainerStarted","Data":"d824721a82bfdf10244745d8cd3c55eae95498b0fa8b0395b92cd93df225144b"} Feb 17 14:24:15 crc kubenswrapper[4836]: I0217 14:24:15.924102 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.924415 4836 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not 
found Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.924483 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert podName:4affaaf4-1113-4635-b30f-da26e04f6662 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:17.924466615 +0000 UTC m=+1084.267394894 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" (UID: "4affaaf4-1113-4635-b30f-da26e04f6662") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:15 crc kubenswrapper[4836]: W0217 14:24:15.941448 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf7c4631_b19a_4160_8581_15f72869a60b.slice/crio-34932151fc3b56e1b5a94958b6f702e6c96c37cce853efb4e3e302718aa28b8e WatchSource:0}: Error finding container 34932151fc3b56e1b5a94958b6f702e6c96c37cce853efb4e3e302718aa28b8e: Status 404 returned error can't find the container with id 34932151fc3b56e1b5a94958b6f702e6c96c37cce853efb4e3e302718aa28b8e Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.948026 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 
-3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fb76g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-jnxzt_openstack-operators(cf7c4631-b19a-4160-8581-15f72869a60b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 14:24:15 crc kubenswrapper[4836]: E0217 14:24:15.952816 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" podUID="cf7c4631-b19a-4160-8581-15f72869a60b" Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.009141 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs"] Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.065863 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ztvz2"] Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.184152 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds"] Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.221562 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c7wzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-w4dds_openstack-operators(d423f7ba-2751-4d99-8102-3bc52b302161): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.226371 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" podUID="d423f7ba-2751-4d99-8102-3bc52b302161" Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.550383 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.550487 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.550709 4836 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.550800 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:18.550775898 +0000 UTC m=+1084.893704167 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "webhook-server-cert" not found Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.550861 4836 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.550957 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:18.550878431 +0000 UTC m=+1084.893806700 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "metrics-server-cert" not found Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.931701 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" event={"ID":"d4aa765a-0f56-4f05-b02f-f041841bc97d","Type":"ContainerStarted","Data":"d5e146a6374cb5855c0a40aff2316fe3979a5e72b86ecab58274bc46f95f188c"} Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.934533 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" event={"ID":"1f238b1a-4c0c-45de-bb7a-12946f426b89","Type":"ContainerStarted","Data":"12dc4cadb3992c9e1df317fc358c26ece1879c305e74ffac515fe389fd8acb19"} Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.936945 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" event={"ID":"d423f7ba-2751-4d99-8102-3bc52b302161","Type":"ContainerStarted","Data":"3285ec10c0066df96f2e89a02367d01cdf3c81fba042953fde3602d36311330c"} Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.940638 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" podUID="d423f7ba-2751-4d99-8102-3bc52b302161" Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.941986 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" event={"ID":"a3c22d9b-6ba0-4dd2-861d-8685c18e9330","Type":"ContainerStarted","Data":"f02de2dd0dafacea8b5f5229718cea7a98f2cc7503b70fda904a4167ac903dfe"} Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.944502 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" event={"ID":"7b9749c7-038f-4814-9357-623346c9172c","Type":"ContainerStarted","Data":"030565104d8040ddd5a1d3e05506bef295ad0ac9ff14c0cf1fd7cc6a3d83ae01"} Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.946554 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" event={"ID":"cf7c4631-b19a-4160-8581-15f72869a60b","Type":"ContainerStarted","Data":"34932151fc3b56e1b5a94958b6f702e6c96c37cce853efb4e3e302718aa28b8e"} Feb 17 14:24:16 crc kubenswrapper[4836]: E0217 14:24:16.950211 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" podUID="cf7c4631-b19a-4160-8581-15f72869a60b" Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.965650 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" event={"ID":"d0c3c41c-ac60-40f0-bdfb-8fe641c9426a","Type":"ContainerStarted","Data":"58778d7f45a38abceed1743a34f03d923fd31e400a10b56b2222e8be90be5561"} Feb 17 14:24:16 crc kubenswrapper[4836]: I0217 14:24:16.982228 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" 
event={"ID":"18a63480-edc2-44ed-bd43-b7750f7f8f33","Type":"ContainerStarted","Data":"35919d25fec64b955968200dd4f791a38595555b32f227a2f54c29ec986a4484"} Feb 17 14:24:17 crc kubenswrapper[4836]: I0217 14:24:17.264523 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:17 crc kubenswrapper[4836]: E0217 14:24:17.264862 4836 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:17 crc kubenswrapper[4836]: E0217 14:24:17.264989 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert podName:a1ae24b8-83c8-416d-9d39-24d84eb6cd83 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:21.264949612 +0000 UTC m=+1087.607878051 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert") pod "infra-operator-controller-manager-79d975b745-f4fvp" (UID: "a1ae24b8-83c8-416d-9d39-24d84eb6cd83") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:17 crc kubenswrapper[4836]: I0217 14:24:17.979947 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:17 crc kubenswrapper[4836]: E0217 14:24:17.980469 4836 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:17 crc kubenswrapper[4836]: E0217 14:24:17.980620 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert podName:4affaaf4-1113-4635-b30f-da26e04f6662 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:21.980596814 +0000 UTC m=+1088.323525093 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" (UID: "4affaaf4-1113-4635-b30f-da26e04f6662") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:18 crc kubenswrapper[4836]: E0217 14:24:18.002964 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" podUID="cf7c4631-b19a-4160-8581-15f72869a60b" Feb 17 14:24:18 crc kubenswrapper[4836]: E0217 14:24:18.027698 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" podUID="d423f7ba-2751-4d99-8102-3bc52b302161" Feb 17 14:24:18 crc kubenswrapper[4836]: I0217 14:24:18.626679 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:18 crc kubenswrapper[4836]: I0217 14:24:18.627086 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod 
\"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:18 crc kubenswrapper[4836]: E0217 14:24:18.627156 4836 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:24:18 crc kubenswrapper[4836]: E0217 14:24:18.627257 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:22.62723233 +0000 UTC m=+1088.970160599 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "webhook-server-cert" not found Feb 17 14:24:18 crc kubenswrapper[4836]: E0217 14:24:18.627403 4836 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:24:18 crc kubenswrapper[4836]: E0217 14:24:18.627490 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:22.627464246 +0000 UTC m=+1088.970392725 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "metrics-server-cert" not found Feb 17 14:24:21 crc kubenswrapper[4836]: I0217 14:24:21.284244 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:21 crc kubenswrapper[4836]: E0217 14:24:21.284490 4836 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:21 crc kubenswrapper[4836]: E0217 14:24:21.284798 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert podName:a1ae24b8-83c8-416d-9d39-24d84eb6cd83 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:29.284773961 +0000 UTC m=+1095.627702240 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert") pod "infra-operator-controller-manager-79d975b745-f4fvp" (UID: "a1ae24b8-83c8-416d-9d39-24d84eb6cd83") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:24:21 crc kubenswrapper[4836]: I0217 14:24:21.995066 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:21 crc kubenswrapper[4836]: E0217 14:24:21.995276 4836 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:21 crc kubenswrapper[4836]: E0217 14:24:21.995375 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert podName:4affaaf4-1113-4635-b30f-da26e04f6662 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:29.995356748 +0000 UTC m=+1096.338285017 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" (UID: "4affaaf4-1113-4635-b30f-da26e04f6662") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:24:22 crc kubenswrapper[4836]: I0217 14:24:22.704751 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:22 crc kubenswrapper[4836]: I0217 14:24:22.705397 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:22 crc kubenswrapper[4836]: E0217 14:24:22.705553 4836 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:24:22 crc kubenswrapper[4836]: E0217 14:24:22.705605 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:30.705590578 +0000 UTC m=+1097.048518847 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "webhook-server-cert" not found Feb 17 14:24:22 crc kubenswrapper[4836]: E0217 14:24:22.706060 4836 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:24:22 crc kubenswrapper[4836]: E0217 14:24:22.706084 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs podName:ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48 nodeName:}" failed. No retries permitted until 2026-02-17 14:24:30.70607644 +0000 UTC m=+1097.049004709 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs") pod "openstack-operator-controller-manager-667f54696f-kskgn" (UID: "ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48") : secret "metrics-server-cert" not found Feb 17 14:24:29 crc kubenswrapper[4836]: E0217 14:24:29.247057 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 17 14:24:29 crc kubenswrapper[4836]: E0217 14:24:29.247949 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wn8jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-6lzts_openstack-operators(9ccd7ed5-2772-4482-af31-2578e98011fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:29 crc kubenswrapper[4836]: E0217 14:24:29.249190 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" podUID="9ccd7ed5-2772-4482-af31-2578e98011fd" Feb 17 14:24:29 crc kubenswrapper[4836]: I0217 14:24:29.321403 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:29 crc kubenswrapper[4836]: I0217 14:24:29.344160 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a1ae24b8-83c8-416d-9d39-24d84eb6cd83-cert\") pod \"infra-operator-controller-manager-79d975b745-f4fvp\" (UID: \"a1ae24b8-83c8-416d-9d39-24d84eb6cd83\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:29 crc kubenswrapper[4836]: I0217 14:24:29.506931 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s5464" Feb 17 14:24:29 crc kubenswrapper[4836]: I0217 14:24:29.514363 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.033842 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.041190 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4affaaf4-1113-4635-b30f-da26e04f6662-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht\" (UID: \"4affaaf4-1113-4635-b30f-da26e04f6662\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.095186 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sz5r7" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.105689 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:30 crc kubenswrapper[4836]: E0217 14:24:30.150836 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" podUID="9ccd7ed5-2772-4482-af31-2578e98011fd" Feb 17 14:24:30 crc kubenswrapper[4836]: E0217 14:24:30.217269 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 17 14:24:30 crc kubenswrapper[4836]: E0217 14:24:30.217525 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-llzlm_openstack-operators(1bb12b86-1f25-4dd9-a44d-449a6deee701): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:30 crc kubenswrapper[4836]: E0217 14:24:30.219594 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" podUID="1bb12b86-1f25-4dd9-a44d-449a6deee701" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.725883 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.726005 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.731176 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-metrics-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.731996 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48-webhook-certs\") pod \"openstack-operator-controller-manager-667f54696f-kskgn\" (UID: \"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48\") " pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:30 crc kubenswrapper[4836]: I0217 14:24:30.865802 4836 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:31 crc kubenswrapper[4836]: E0217 14:24:31.157152 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" podUID="1bb12b86-1f25-4dd9-a44d-449a6deee701" Feb 17 14:24:34 crc kubenswrapper[4836]: E0217 14:24:34.042810 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 17 14:24:34 crc kubenswrapper[4836]: E0217 14:24:34.043520 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9gnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-ztvz2_openstack-operators(d4aa765a-0f56-4f05-b02f-f041841bc97d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:34 crc kubenswrapper[4836]: E0217 14:24:34.044783 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" podUID="d4aa765a-0f56-4f05-b02f-f041841bc97d" Feb 17 14:24:34 crc kubenswrapper[4836]: E0217 14:24:34.188508 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" podUID="d4aa765a-0f56-4f05-b02f-f041841bc97d" Feb 17 14:24:35 crc kubenswrapper[4836]: E0217 14:24:35.144648 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 17 14:24:35 crc kubenswrapper[4836]: E0217 14:24:35.145373 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6x2cc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-zkzrs_openstack-operators(7b9749c7-038f-4814-9357-623346c9172c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:35 crc kubenswrapper[4836]: E0217 14:24:35.146628 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" podUID="7b9749c7-038f-4814-9357-623346c9172c" Feb 17 14:24:35 crc kubenswrapper[4836]: E0217 14:24:35.194814 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" podUID="7b9749c7-038f-4814-9357-623346c9172c" Feb 17 14:24:35 crc kubenswrapper[4836]: E0217 14:24:35.755406 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 17 14:24:35 crc kubenswrapper[4836]: E0217 14:24:35.755628 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqpqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-mq76b_openstack-operators(f6ba6343-872d-4e36-accf-959bb437f82d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:35 crc kubenswrapper[4836]: E0217 14:24:35.757589 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" podUID="f6ba6343-872d-4e36-accf-959bb437f82d" Feb 17 14:24:36 crc kubenswrapper[4836]: E0217 14:24:36.201902 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" podUID="f6ba6343-872d-4e36-accf-959bb437f82d" Feb 17 14:24:36 crc kubenswrapper[4836]: E0217 14:24:36.301652 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 17 14:24:36 crc kubenswrapper[4836]: E0217 14:24:36.301917 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kv6mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-7ktgs_openstack-operators(d0c3c41c-ac60-40f0-bdfb-8fe641c9426a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:36 crc kubenswrapper[4836]: E0217 14:24:36.303092 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" podUID="d0c3c41c-ac60-40f0-bdfb-8fe641c9426a" Feb 17 14:24:36 crc kubenswrapper[4836]: E0217 14:24:36.844665 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 17 14:24:36 crc kubenswrapper[4836]: E0217 14:24:36.844968 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v292x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-b6cfm_openstack-operators(12cff299-e5ea-40a9-8a69-528c478cd0a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:36 crc kubenswrapper[4836]: E0217 14:24:36.847101 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" podUID="12cff299-e5ea-40a9-8a69-528c478cd0a0" Feb 17 14:24:37 crc kubenswrapper[4836]: E0217 14:24:37.210931 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" podUID="12cff299-e5ea-40a9-8a69-528c478cd0a0" Feb 17 14:24:37 crc kubenswrapper[4836]: E0217 14:24:37.212503 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" podUID="d0c3c41c-ac60-40f0-bdfb-8fe641c9426a" Feb 17 14:24:37 crc kubenswrapper[4836]: E0217 14:24:37.392700 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 17 14:24:37 crc kubenswrapper[4836]: E0217 14:24:37.392928 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mwb95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-6c4rn_openstack-operators(3d12b131-73a0-477e-ab9e-579309b0f5b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:37 crc kubenswrapper[4836]: E0217 14:24:37.395085 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" podUID="3d12b131-73a0-477e-ab9e-579309b0f5b1" Feb 17 14:24:38 crc kubenswrapper[4836]: E0217 14:24:38.074328 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 17 14:24:38 crc kubenswrapper[4836]: E0217 14:24:38.074507 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmrdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-bv7s8_openstack-operators(f2e6ac9f-ee72-4a28-b298-9b2f918d0c95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:38 crc kubenswrapper[4836]: E0217 14:24:38.075970 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" podUID="f2e6ac9f-ee72-4a28-b298-9b2f918d0c95" Feb 17 14:24:38 crc kubenswrapper[4836]: E0217 14:24:38.228553 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" podUID="3d12b131-73a0-477e-ab9e-579309b0f5b1" Feb 17 14:24:38 crc kubenswrapper[4836]: E0217 14:24:38.228780 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" podUID="f2e6ac9f-ee72-4a28-b298-9b2f918d0c95" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.092949 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.093187 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nnnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-k9p46_openstack-operators(e805966b-ea22-4c2a-a6c4-3622300fcb2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.094384 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" podUID="e805966b-ea22-4c2a-a6c4-3622300fcb2f" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.242410 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" podUID="e805966b-ea22-4c2a-a6c4-3622300fcb2f" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.432167 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.432277 4836 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.432523 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mzgtd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d6964fcdb-rbq62_openstack-operators(a3c22d9b-6ba0-4dd2-861d-8685c18e9330): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.433797 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" podUID="a3c22d9b-6ba0-4dd2-861d-8685c18e9330" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.978385 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.978674 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-qnb5b_openstack-operators(18a63480-edc2-44ed-bd43-b7750f7f8f33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:39 crc kubenswrapper[4836]: E0217 14:24:39.980006 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" podUID="18a63480-edc2-44ed-bd43-b7750f7f8f33" Feb 17 14:24:40 crc kubenswrapper[4836]: E0217 14:24:40.250325 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" podUID="a3c22d9b-6ba0-4dd2-861d-8685c18e9330" Feb 17 14:24:40 crc kubenswrapper[4836]: E0217 14:24:40.250863 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" podUID="18a63480-edc2-44ed-bd43-b7750f7f8f33" Feb 17 14:24:40 crc kubenswrapper[4836]: E0217 14:24:40.624558 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 17 14:24:40 crc kubenswrapper[4836]: E0217 14:24:40.624782 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jxl4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-5hz7c_openstack-operators(52a90e1a-0e2d-4488-8a1a-34de15bfa3a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:24:40 crc kubenswrapper[4836]: E0217 14:24:40.626502 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" podUID="52a90e1a-0e2d-4488-8a1a-34de15bfa3a5" Feb 17 14:24:41 crc kubenswrapper[4836]: E0217 14:24:41.259386 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" podUID="52a90e1a-0e2d-4488-8a1a-34de15bfa3a5" Feb 17 14:24:43 crc kubenswrapper[4836]: I0217 14:24:43.244987 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp"] Feb 17 14:24:43 crc kubenswrapper[4836]: W0217 14:24:43.534614 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ae24b8_83c8_416d_9d39_24d84eb6cd83.slice/crio-29c49fea7918f4841548ce64de5334ffb851ee553c872044a1a7d3506146bcc1 WatchSource:0}: Error finding container 29c49fea7918f4841548ce64de5334ffb851ee553c872044a1a7d3506146bcc1: Status 404 returned error can't find the container with id 29c49fea7918f4841548ce64de5334ffb851ee553c872044a1a7d3506146bcc1 Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.076473 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht"] Feb 17 14:24:44 crc kubenswrapper[4836]: W0217 14:24:44.102799 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4affaaf4_1113_4635_b30f_da26e04f6662.slice/crio-207a583e5ef8f57f77f7026c8fc84b6f995ef483add4599c8abedcb41cbd7100 WatchSource:0}: Error finding container 207a583e5ef8f57f77f7026c8fc84b6f995ef483add4599c8abedcb41cbd7100: Status 404 returned error can't find the container 
with id 207a583e5ef8f57f77f7026c8fc84b6f995ef483add4599c8abedcb41cbd7100 Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.107156 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn"] Feb 17 14:24:44 crc kubenswrapper[4836]: W0217 14:24:44.123128 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca022a36_1c0e_4d3b_a6cf_87f4a78cfd48.slice/crio-b9a51bec82daee147a4b5b4b6929361e333f732f6fd29c9819a6f2fbbc2af054 WatchSource:0}: Error finding container b9a51bec82daee147a4b5b4b6929361e333f732f6fd29c9819a6f2fbbc2af054: Status 404 returned error can't find the container with id b9a51bec82daee147a4b5b4b6929361e333f732f6fd29c9819a6f2fbbc2af054 Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.309618 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" event={"ID":"cf7c4631-b19a-4160-8581-15f72869a60b","Type":"ContainerStarted","Data":"958fcda9023106cd43167a56dddecd3ccee7273b15bfa8412cad588e8c2edb03"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.310833 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.329943 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" event={"ID":"a7c6acc7-4243-4c0d-a723-e83dc2e054df","Type":"ContainerStarted","Data":"e56083496734d6d4073a7b0c7e0e5be4b8d8f4db663d854e080d73b1ec0786d7"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.330182 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 
14:24:44.350535 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" event={"ID":"0962ca43-43c4-4884-bd8e-889835f83632","Type":"ContainerStarted","Data":"e63affa4126c2053cd36b1f5738f4d983b8d1752706f2e3a5c6b3007ff77a087"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.351366 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.362482 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" event={"ID":"c3d9def3-7f53-4acc-9c46-d37ddf41e3b7","Type":"ContainerStarted","Data":"06f9ef1d7d3b07ef46637391a35b2eeaa0cb72f854cc7159a4aed561e3024636"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.363225 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.376477 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" podStartSLOduration=3.599888231 podStartE2EDuration="31.376395907s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.947564649 +0000 UTC m=+1082.290492918" lastFinishedPulling="2026-02-17 14:24:43.724072325 +0000 UTC m=+1110.067000594" observedRunningTime="2026-02-17 14:24:44.3650595 +0000 UTC m=+1110.707987779" watchObservedRunningTime="2026-02-17 14:24:44.376395907 +0000 UTC m=+1110.719324176" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.390673 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" 
event={"ID":"ce77a6a5-95bb-4758-8a38-cdc354fd9d6c","Type":"ContainerStarted","Data":"3f2229ea30a416d447515977dca499b1a4097965d6864eebeb909c756f1b55e5"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.391399 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.396539 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" event={"ID":"1f238b1a-4c0c-45de-bb7a-12946f426b89","Type":"ContainerStarted","Data":"2476fcf1b1ba9365ac703f9c160d00016da1128afa99f6bc0d6399a11c9a9f48"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.396751 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.408004 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" event={"ID":"d423f7ba-2751-4d99-8102-3bc52b302161","Type":"ContainerStarted","Data":"4a6e1d7cc23236717b9d91223594c2175b6cb0c66ca4b09b080158d4c7387cd6"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.409729 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" podStartSLOduration=5.700681604 podStartE2EDuration="31.40970954s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:14.894788921 +0000 UTC m=+1081.237717190" lastFinishedPulling="2026-02-17 14:24:40.603816857 +0000 UTC m=+1106.946745126" observedRunningTime="2026-02-17 14:24:44.397645083 +0000 UTC m=+1110.740573352" watchObservedRunningTime="2026-02-17 14:24:44.40970954 +0000 UTC m=+1110.752637809" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.422746 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" event={"ID":"9ccd7ed5-2772-4482-af31-2578e98011fd","Type":"ContainerStarted","Data":"2b435b6c7424118be4215baf54140a03ec61746af41ee081413eb2479f1896c7"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.423905 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.435096 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" event={"ID":"a1ae24b8-83c8-416d-9d39-24d84eb6cd83","Type":"ContainerStarted","Data":"29c49fea7918f4841548ce64de5334ffb851ee553c872044a1a7d3506146bcc1"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.452346 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" event={"ID":"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48","Type":"ContainerStarted","Data":"b9a51bec82daee147a4b5b4b6929361e333f732f6fd29c9819a6f2fbbc2af054"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.455482 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" podStartSLOduration=6.640750083 podStartE2EDuration="32.455459941s" podCreationTimestamp="2026-02-17 14:24:12 +0000 UTC" firstStartedPulling="2026-02-17 14:24:14.786370435 +0000 UTC m=+1081.129298704" lastFinishedPulling="2026-02-17 14:24:40.601080293 +0000 UTC m=+1106.944008562" observedRunningTime="2026-02-17 14:24:44.442940761 +0000 UTC m=+1110.785869040" watchObservedRunningTime="2026-02-17 14:24:44.455459941 +0000 UTC m=+1110.798388210" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.474154 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" event={"ID":"4affaaf4-1113-4635-b30f-da26e04f6662","Type":"ContainerStarted","Data":"207a583e5ef8f57f77f7026c8fc84b6f995ef483add4599c8abedcb41cbd7100"} Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.487051 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" podStartSLOduration=5.962083731 podStartE2EDuration="31.487012986s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.078136832 +0000 UTC m=+1081.421065101" lastFinishedPulling="2026-02-17 14:24:40.603066087 +0000 UTC m=+1106.945994356" observedRunningTime="2026-02-17 14:24:44.472838021 +0000 UTC m=+1110.815766310" watchObservedRunningTime="2026-02-17 14:24:44.487012986 +0000 UTC m=+1110.829941255" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.533933 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" podStartSLOduration=6.794177276 podStartE2EDuration="31.533909867s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.863993854 +0000 UTC m=+1082.206922123" lastFinishedPulling="2026-02-17 14:24:40.603726445 +0000 UTC m=+1106.946654714" observedRunningTime="2026-02-17 14:24:44.529759965 +0000 UTC m=+1110.872688244" watchObservedRunningTime="2026-02-17 14:24:44.533909867 +0000 UTC m=+1110.876838136" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.607765 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" podStartSLOduration=2.973091447 podStartE2EDuration="31.607738239s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.089423223 +0000 UTC m=+1081.432351492" 
lastFinishedPulling="2026-02-17 14:24:43.724070015 +0000 UTC m=+1110.066998284" observedRunningTime="2026-02-17 14:24:44.605731704 +0000 UTC m=+1110.948659973" watchObservedRunningTime="2026-02-17 14:24:44.607738239 +0000 UTC m=+1110.950666508" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.611692 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" podStartSLOduration=6.099904681 podStartE2EDuration="31.611680225s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.090741788 +0000 UTC m=+1081.433670047" lastFinishedPulling="2026-02-17 14:24:40.602517322 +0000 UTC m=+1106.945445591" observedRunningTime="2026-02-17 14:24:44.556854759 +0000 UTC m=+1110.899783028" watchObservedRunningTime="2026-02-17 14:24:44.611680225 +0000 UTC m=+1110.954608514" Feb 17 14:24:44 crc kubenswrapper[4836]: I0217 14:24:44.638475 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4dds" podStartSLOduration=3.128180479 podStartE2EDuration="30.638458511s" podCreationTimestamp="2026-02-17 14:24:14 +0000 UTC" firstStartedPulling="2026-02-17 14:24:16.22139024 +0000 UTC m=+1082.564318509" lastFinishedPulling="2026-02-17 14:24:43.731668272 +0000 UTC m=+1110.074596541" observedRunningTime="2026-02-17 14:24:44.632779157 +0000 UTC m=+1110.975707446" watchObservedRunningTime="2026-02-17 14:24:44.638458511 +0000 UTC m=+1110.981386780" Feb 17 14:24:45 crc kubenswrapper[4836]: I0217 14:24:45.490983 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" event={"ID":"ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48","Type":"ContainerStarted","Data":"708a2fe9e35db4f83b40aa4c7322845835b651a153295351abeb42dbbcd2edd8"} Feb 17 14:24:45 crc kubenswrapper[4836]: I0217 14:24:45.491456 4836 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:45 crc kubenswrapper[4836]: I0217 14:24:45.499247 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" event={"ID":"1bb12b86-1f25-4dd9-a44d-449a6deee701","Type":"ContainerStarted","Data":"cf32f810bc332f8efcb400d9b1dcbe8ca81ac1a06416a46b64f17eae54b3e1db"} Feb 17 14:24:45 crc kubenswrapper[4836]: I0217 14:24:45.615861 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" podStartSLOduration=31.615831814 podStartE2EDuration="31.615831814s" podCreationTimestamp="2026-02-17 14:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:24:45.570264089 +0000 UTC m=+1111.913192358" watchObservedRunningTime="2026-02-17 14:24:45.615831814 +0000 UTC m=+1111.958760083" Feb 17 14:24:45 crc kubenswrapper[4836]: I0217 14:24:45.673824 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" podStartSLOduration=4.218669397 podStartE2EDuration="32.673792285s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.775226671 +0000 UTC m=+1082.118154940" lastFinishedPulling="2026-02-17 14:24:44.230349559 +0000 UTC m=+1110.573277828" observedRunningTime="2026-02-17 14:24:45.618517306 +0000 UTC m=+1111.961445605" watchObservedRunningTime="2026-02-17 14:24:45.673792285 +0000 UTC m=+1112.016720574" Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.615680 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" 
event={"ID":"d4aa765a-0f56-4f05-b02f-f041841bc97d","Type":"ContainerStarted","Data":"5191cedafd66ba2cd22d023c2056361c8e2acf7c2b599367e99e74082824f087"} Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.616811 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.856721 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" event={"ID":"f6ba6343-872d-4e36-accf-959bb437f82d","Type":"ContainerStarted","Data":"e24c63fa0f774e9fa7bb7ef037058452c11973342b3b28780f09ed7f801ebff1"} Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.857274 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.868336 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" event={"ID":"7b9749c7-038f-4814-9357-623346c9172c","Type":"ContainerStarted","Data":"be7ade67079dc5021195621f82d2860a5ef1f09ba1cff570b6f8a7a6c21d0ab9"} Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.868932 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.870964 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" event={"ID":"a1ae24b8-83c8-416d-9d39-24d84eb6cd83","Type":"ContainerStarted","Data":"3282111097c5b5c1e0bb4f2354474b0578f6c05ab2c972060fe1a6428e670589"} Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.871168 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.880141 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" event={"ID":"d0c3c41c-ac60-40f0-bdfb-8fe641c9426a","Type":"ContainerStarted","Data":"9768fb7add75b5b01fc42c92fdcccdabd19bed5b5c0fb476beebce8f4b53ded5"} Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.882718 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.885091 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" event={"ID":"4affaaf4-1113-4635-b30f-da26e04f6662","Type":"ContainerStarted","Data":"c6739dc27b24b8b995bb3d2605c8cb7044479d89652a36941d38b08687be632f"} Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.885367 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:24:50 crc kubenswrapper[4836]: I0217 14:24:50.921239 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-667f54696f-kskgn" Feb 17 14:24:51 crc kubenswrapper[4836]: I0217 14:24:51.661144 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" podStartSLOduration=5.192983177 podStartE2EDuration="38.661121159s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:16.208330931 +0000 UTC m=+1082.551259200" lastFinishedPulling="2026-02-17 14:24:49.676468913 +0000 UTC m=+1116.019397182" observedRunningTime="2026-02-17 14:24:50.999408443 +0000 UTC 
m=+1117.342336722" watchObservedRunningTime="2026-02-17 14:24:51.661121159 +0000 UTC m=+1118.004049438" Feb 17 14:24:51 crc kubenswrapper[4836]: I0217 14:24:51.713572 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" podStartSLOduration=4.765050083 podStartE2EDuration="38.71353428s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.724698216 +0000 UTC m=+1082.067626485" lastFinishedPulling="2026-02-17 14:24:49.673182413 +0000 UTC m=+1116.016110682" observedRunningTime="2026-02-17 14:24:51.677607956 +0000 UTC m=+1118.020536225" watchObservedRunningTime="2026-02-17 14:24:51.71353428 +0000 UTC m=+1118.056462549" Feb 17 14:24:51 crc kubenswrapper[4836]: I0217 14:24:51.715643 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" podStartSLOduration=5.176168587 podStartE2EDuration="38.715630267s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.825379406 +0000 UTC m=+1082.168307675" lastFinishedPulling="2026-02-17 14:24:49.364841086 +0000 UTC m=+1115.707769355" observedRunningTime="2026-02-17 14:24:51.708670168 +0000 UTC m=+1118.051598447" watchObservedRunningTime="2026-02-17 14:24:51.715630267 +0000 UTC m=+1118.058558536" Feb 17 14:24:51 crc kubenswrapper[4836]: I0217 14:24:51.746042 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" podStartSLOduration=4.75159594 podStartE2EDuration="38.74601001s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:16.030465806 +0000 UTC m=+1082.373394075" lastFinishedPulling="2026-02-17 14:24:50.024879876 +0000 UTC m=+1116.367808145" observedRunningTime="2026-02-17 14:24:51.741182849 +0000 UTC m=+1118.084111118" 
watchObservedRunningTime="2026-02-17 14:24:51.74601001 +0000 UTC m=+1118.088938279" Feb 17 14:24:51 crc kubenswrapper[4836]: I0217 14:24:51.786669 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" podStartSLOduration=33.539618684 podStartE2EDuration="38.786633792s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:44.115896066 +0000 UTC m=+1110.458824335" lastFinishedPulling="2026-02-17 14:24:49.362911174 +0000 UTC m=+1115.705839443" observedRunningTime="2026-02-17 14:24:51.780022242 +0000 UTC m=+1118.122950531" watchObservedRunningTime="2026-02-17 14:24:51.786633792 +0000 UTC m=+1118.129562071" Feb 17 14:24:51 crc kubenswrapper[4836]: I0217 14:24:51.944880 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" podStartSLOduration=32.835566069 podStartE2EDuration="38.94485598s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:43.539278736 +0000 UTC m=+1109.882207005" lastFinishedPulling="2026-02-17 14:24:49.648568657 +0000 UTC m=+1115.991496916" observedRunningTime="2026-02-17 14:24:51.941472738 +0000 UTC m=+1118.284401037" watchObservedRunningTime="2026-02-17 14:24:51.94485598 +0000 UTC m=+1118.287784249" Feb 17 14:24:53 crc kubenswrapper[4836]: I0217 14:24:53.395360 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-54696" Feb 17 14:24:53 crc kubenswrapper[4836]: I0217 14:24:53.801174 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zxb25" Feb 17 14:24:53 crc kubenswrapper[4836]: I0217 14:24:53.801654 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8wdwr" Feb 17 14:24:53 crc kubenswrapper[4836]: I0217 14:24:53.815151 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-7vwdd" Feb 17 14:24:53 crc kubenswrapper[4836]: I0217 14:24:53.996257 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-6lzts" Feb 17 14:24:54 crc kubenswrapper[4836]: I0217 14:24:54.331813 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" Feb 17 14:24:54 crc kubenswrapper[4836]: I0217 14:24:54.396189 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-llzlm" Feb 17 14:24:54 crc kubenswrapper[4836]: I0217 14:24:54.413879 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-zkzrs" Feb 17 14:24:54 crc kubenswrapper[4836]: I0217 14:24:54.542915 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jnxzt" Feb 17 14:24:54 crc kubenswrapper[4836]: I0217 14:24:54.829211 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-ztvz2" Feb 17 14:24:54 crc kubenswrapper[4836]: I0217 14:24:54.831222 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-lmtng" Feb 17 14:24:55 crc kubenswrapper[4836]: I0217 14:24:55.278414 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mq76b" Feb 17 14:24:59 crc kubenswrapper[4836]: I0217 14:24:59.760918 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-f4fvp" Feb 17 14:25:00 crc kubenswrapper[4836]: I0217 14:25:00.116204 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.371179 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" event={"ID":"e805966b-ea22-4c2a-a6c4-3622300fcb2f","Type":"ContainerStarted","Data":"2321f59df1f41fd133675fc8ee34eba7282d0495b1c7daa28ec9b46cd02156b1"} Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.378320 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.381058 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" event={"ID":"3d12b131-73a0-477e-ab9e-579309b0f5b1","Type":"ContainerStarted","Data":"807688d0e1d6c570f4713555266e9275668dfb9982995d91bdd84c6cbaf4e0d0"} Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.382911 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.389473 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" event={"ID":"12cff299-e5ea-40a9-8a69-528c478cd0a0","Type":"ContainerStarted","Data":"f8d70c0e3d96a6dfb60dfb8639e0e5011472512bc988c98b13b487c2828e8971"} Feb 17 14:25:03 crc 
kubenswrapper[4836]: I0217 14:25:03.395623 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.421169 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" event={"ID":"18a63480-edc2-44ed-bd43-b7750f7f8f33","Type":"ContainerStarted","Data":"7f2b3c2f7eec73e27bcb4cd5915c8293e58fe18df35b97da46fbb0d6fd1af60e"} Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.423870 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.453968 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" event={"ID":"f2e6ac9f-ee72-4a28-b298-9b2f918d0c95","Type":"ContainerStarted","Data":"8ce1dabe9924cb8b28bf979f68fdd93d0f836e1196ca4a2fd80cc8339a67b9a6"} Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.455536 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" podStartSLOduration=3.987661765 podStartE2EDuration="50.455489s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.717349071 +0000 UTC m=+1082.060277340" lastFinishedPulling="2026-02-17 14:25:02.185176306 +0000 UTC m=+1128.528104575" observedRunningTime="2026-02-17 14:25:03.433309848 +0000 UTC m=+1129.776238127" watchObservedRunningTime="2026-02-17 14:25:03.455489 +0000 UTC m=+1129.798417289" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.458959 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" Feb 17 14:25:03 crc 
kubenswrapper[4836]: I0217 14:25:03.462990 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" event={"ID":"a3c22d9b-6ba0-4dd2-861d-8685c18e9330","Type":"ContainerStarted","Data":"cac0671ac8dbcb53aec8149b0b645141f6585af883bb10cdeb19994be97350ba"} Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.465897 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.507239 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" podStartSLOduration=3.661467534 podStartE2EDuration="50.507184731s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.339814839 +0000 UTC m=+1081.682743098" lastFinishedPulling="2026-02-17 14:25:02.185532026 +0000 UTC m=+1128.528460295" observedRunningTime="2026-02-17 14:25:03.491467084 +0000 UTC m=+1129.834395373" watchObservedRunningTime="2026-02-17 14:25:03.507184731 +0000 UTC m=+1129.850113010" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.572098 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" podStartSLOduration=4.24988045 podStartE2EDuration="50.572028948s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.863998604 +0000 UTC m=+1082.206926873" lastFinishedPulling="2026-02-17 14:25:02.186147102 +0000 UTC m=+1128.529075371" observedRunningTime="2026-02-17 14:25:03.548971214 +0000 UTC m=+1129.891899503" watchObservedRunningTime="2026-02-17 14:25:03.572028948 +0000 UTC m=+1129.914957217" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.637369 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" podStartSLOduration=4.023844526 podStartE2EDuration="51.637328519s" podCreationTimestamp="2026-02-17 14:24:12 +0000 UTC" firstStartedPulling="2026-02-17 14:24:14.58485843 +0000 UTC m=+1080.927786699" lastFinishedPulling="2026-02-17 14:25:02.198342423 +0000 UTC m=+1128.541270692" observedRunningTime="2026-02-17 14:25:03.631463729 +0000 UTC m=+1129.974391998" watchObservedRunningTime="2026-02-17 14:25:03.637328519 +0000 UTC m=+1129.980256818" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.683003 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" podStartSLOduration=4.349291914 podStartE2EDuration="50.682978485s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.864322703 +0000 UTC m=+1082.207250982" lastFinishedPulling="2026-02-17 14:25:02.198009274 +0000 UTC m=+1128.540937553" observedRunningTime="2026-02-17 14:25:03.675957705 +0000 UTC m=+1130.018885984" watchObservedRunningTime="2026-02-17 14:25:03.682978485 +0000 UTC m=+1130.025906754" Feb 17 14:25:03 crc kubenswrapper[4836]: I0217 14:25:03.718536 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" podStartSLOduration=3.534614384 podStartE2EDuration="50.718491079s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.013775689 +0000 UTC m=+1081.356703958" lastFinishedPulling="2026-02-17 14:25:02.197652384 +0000 UTC m=+1128.540580653" observedRunningTime="2026-02-17 14:25:03.710194014 +0000 UTC m=+1130.053122283" watchObservedRunningTime="2026-02-17 14:25:03.718491079 +0000 UTC m=+1130.061419348" Feb 17 14:25:04 crc kubenswrapper[4836]: I0217 14:25:04.477084 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" event={"ID":"52a90e1a-0e2d-4488-8a1a-34de15bfa3a5","Type":"ContainerStarted","Data":"f829eaa4f03e253de4cc0fe49720d25547d1eaa733834ad46123eace2cc39e92"} Feb 17 14:25:04 crc kubenswrapper[4836]: I0217 14:25:04.548579 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" podStartSLOduration=5.097211322 podStartE2EDuration="51.548545208s" podCreationTimestamp="2026-02-17 14:24:13 +0000 UTC" firstStartedPulling="2026-02-17 14:24:15.814518677 +0000 UTC m=+1082.157446946" lastFinishedPulling="2026-02-17 14:25:02.265852563 +0000 UTC m=+1128.608780832" observedRunningTime="2026-02-17 14:25:04.544572411 +0000 UTC m=+1130.887500690" watchObservedRunningTime="2026-02-17 14:25:04.548545208 +0000 UTC m=+1130.891473487" Feb 17 14:25:04 crc kubenswrapper[4836]: I0217 14:25:04.625846 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-7ktgs" Feb 17 14:25:13 crc kubenswrapper[4836]: I0217 14:25:13.367070 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b6cfm" Feb 17 14:25:13 crc kubenswrapper[4836]: I0217 14:25:13.631810 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bv7s8" Feb 17 14:25:14 crc kubenswrapper[4836]: I0217 14:25:14.192796 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6c4rn" Feb 17 14:25:14 crc kubenswrapper[4836]: I0217 14:25:14.235241 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-k9p46" Feb 17 14:25:14 crc 
kubenswrapper[4836]: I0217 14:25:14.259192 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" Feb 17 14:25:14 crc kubenswrapper[4836]: I0217 14:25:14.263277 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-5hz7c" Feb 17 14:25:14 crc kubenswrapper[4836]: I0217 14:25:14.268228 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qnb5b" Feb 17 14:25:14 crc kubenswrapper[4836]: I0217 14:25:14.633174 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d6964fcdb-rbq62" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.790905 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-46wms"] Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.794057 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.799951 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.800286 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-vkr8h" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.800557 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.809274 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.823265 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-46wms"] Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.843831 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmnr\" (UniqueName: \"kubernetes.io/projected/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-kube-api-access-8qmnr\") pod \"dnsmasq-dns-675f4bcbfc-46wms\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.844452 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-config\") pod \"dnsmasq-dns-675f4bcbfc-46wms\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.934198 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ggz9w"] Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.936775 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.946547 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvxj4\" (UniqueName: \"kubernetes.io/projected/24a665ea-1793-426d-b4df-48bfdd048f1c-kube-api-access-pvxj4\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.946661 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qmnr\" (UniqueName: \"kubernetes.io/projected/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-kube-api-access-8qmnr\") pod \"dnsmasq-dns-675f4bcbfc-46wms\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.946737 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.946829 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-config\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.946874 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-config\") pod \"dnsmasq-dns-675f4bcbfc-46wms\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") 
" pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.949377 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-config\") pod \"dnsmasq-dns-675f4bcbfc-46wms\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.963018 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 14:25:33 crc kubenswrapper[4836]: I0217 14:25:33.963238 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ggz9w"] Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.002174 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qmnr\" (UniqueName: \"kubernetes.io/projected/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-kube-api-access-8qmnr\") pod \"dnsmasq-dns-675f4bcbfc-46wms\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.056897 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvxj4\" (UniqueName: \"kubernetes.io/projected/24a665ea-1793-426d-b4df-48bfdd048f1c-kube-api-access-pvxj4\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.163642 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.165256 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.167952 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.168168 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-config\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.172048 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-config\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.205635 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvxj4\" (UniqueName: \"kubernetes.io/projected/24a665ea-1793-426d-b4df-48bfdd048f1c-kube-api-access-pvxj4\") pod \"dnsmasq-dns-78dd6ddcc-ggz9w\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.262737 
4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.848035 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-46wms"] Feb 17 14:25:34 crc kubenswrapper[4836]: I0217 14:25:34.944808 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ggz9w"] Feb 17 14:25:34 crc kubenswrapper[4836]: W0217 14:25:34.953174 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a665ea_1793_426d_b4df_48bfdd048f1c.slice/crio-a6bb86072d0798a1afb87a4b547d5cc9e6e5bb7f7f723d97aa4e7592e494e55c WatchSource:0}: Error finding container a6bb86072d0798a1afb87a4b547d5cc9e6e5bb7f7f723d97aa4e7592e494e55c: Status 404 returned error can't find the container with id a6bb86072d0798a1afb87a4b547d5cc9e6e5bb7f7f723d97aa4e7592e494e55c Feb 17 14:25:35 crc kubenswrapper[4836]: I0217 14:25:35.187682 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" event={"ID":"24a665ea-1793-426d-b4df-48bfdd048f1c","Type":"ContainerStarted","Data":"a6bb86072d0798a1afb87a4b547d5cc9e6e5bb7f7f723d97aa4e7592e494e55c"} Feb 17 14:25:35 crc kubenswrapper[4836]: I0217 14:25:35.189881 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" event={"ID":"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b","Type":"ContainerStarted","Data":"b0dca2b7fd572359a51505f1dec3dc3d3db3e7f58bf21d6f39749cf427d85d3b"} Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.355145 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-46wms"] Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.397246 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jbcz5"] Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 
14:25:36.403973 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.439080 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jbcz5"] Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.518438 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-config\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.518604 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.518651 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zckh\" (UniqueName: \"kubernetes.io/projected/e14b6d2f-85ef-4f0c-8a81-426aee02b456-kube-api-access-4zckh\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.623204 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.623322 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4zckh\" (UniqueName: \"kubernetes.io/projected/e14b6d2f-85ef-4f0c-8a81-426aee02b456-kube-api-access-4zckh\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.623426 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-config\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.692998 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-config\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.697109 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.737207 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zckh\" (UniqueName: \"kubernetes.io/projected/e14b6d2f-85ef-4f0c-8a81-426aee02b456-kube-api-access-4zckh\") pod \"dnsmasq-dns-666b6646f7-jbcz5\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.800973 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.906734 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ggz9w"] Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.961442 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s6vqb"] Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.963678 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:36 crc kubenswrapper[4836]: I0217 14:25:36.980556 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s6vqb"] Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.030794 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-config\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.030929 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ntrv\" (UniqueName: \"kubernetes.io/projected/63d320ce-8669-4285-b4bc-dbb6eeb9a190-kube-api-access-8ntrv\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.031032 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 
14:25:37.133281 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ntrv\" (UniqueName: \"kubernetes.io/projected/63d320ce-8669-4285-b4bc-dbb6eeb9a190-kube-api-access-8ntrv\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.134956 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.136531 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.136683 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-config\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.137593 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-config\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.161829 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ntrv\" 
(UniqueName: \"kubernetes.io/projected/63d320ce-8669-4285-b4bc-dbb6eeb9a190-kube-api-access-8ntrv\") pod \"dnsmasq-dns-57d769cc4f-s6vqb\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.300056 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jbcz5"] Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.305013 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.570621 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.586164 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.586349 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.592281 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.592393 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.592518 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.592702 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.592796 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.592911 4836 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xcfhz" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.592992 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.859917 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.861625 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.861670 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.861849 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec9408e6-0474-4f84-842e-b1c20f42a7b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.862011 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.862036 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.862066 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec9408e6-0474-4f84-842e-b1c20f42a7b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.862142 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfz2p\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-kube-api-access-xfz2p\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.862174 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.862294 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.862394 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964398 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964471 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964555 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964586 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964607 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964645 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec9408e6-0474-4f84-842e-b1c20f42a7b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964688 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964714 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964745 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec9408e6-0474-4f84-842e-b1c20f42a7b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc 
kubenswrapper[4836]: I0217 14:25:37.964773 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfz2p\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-kube-api-access-xfz2p\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.964797 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.965441 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.966265 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.971027 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec9408e6-0474-4f84-842e-b1c20f42a7b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.971377 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.971537 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.971637 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03c0dd7af3740fd4ae1135362211cc7ed6efb2bcdea721aed8377f0d38bda50d/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.971653 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec9408e6-0474-4f84-842e-b1c20f42a7b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.976892 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.977779 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " 
pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.978855 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec9408e6-0474-4f84-842e-b1c20f42a7b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.989823 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:37 crc kubenswrapper[4836]: I0217 14:25:37.996065 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfz2p\" (UniqueName: \"kubernetes.io/projected/ec9408e6-0474-4f84-842e-b1c20f42a7b8-kube-api-access-xfz2p\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.032808 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8603b9-3f26-4e03-8164-a0930fb9429c\") pod \"rabbitmq-server-0\" (UID: \"ec9408e6-0474-4f84-842e-b1c20f42a7b8\") " pod="openstack/rabbitmq-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.071215 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s6vqb"] Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.072063 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.085645 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.087898 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.092757 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-q7f7v" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.093126 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.093269 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.093855 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.095282 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.095747 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.095971 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 14:25:38 crc kubenswrapper[4836]: W0217 14:25:38.118377 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d320ce_8669_4285_b4bc_dbb6eeb9a190.slice/crio-0ccc818ba3aecccefe49bbab270ac8d64079fabeee863e8305c62599ebffa6de WatchSource:0}: Error finding container 
0ccc818ba3aecccefe49bbab270ac8d64079fabeee863e8305c62599ebffa6de: Status 404 returned error can't find the container with id 0ccc818ba3aecccefe49bbab270ac8d64079fabeee863e8305c62599ebffa6de Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.139111 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.260080 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" event={"ID":"e14b6d2f-85ef-4f0c-8a81-426aee02b456","Type":"ContainerStarted","Data":"4be2faa5279826c8447da22307f09f3ad1d1675b115d7c7c5cab72070952c1fe"} Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270060 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270117 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270191 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270214 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270253 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270326 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f866bb7-5209-4275-8884-df6f074b3f7c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270353 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f866bb7-5209-4275-8884-df6f074b3f7c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270377 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270400 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270447 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.270477 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9h24\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-kube-api-access-t9h24\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.272610 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" event={"ID":"63d320ce-8669-4285-b4bc-dbb6eeb9a190","Type":"ContainerStarted","Data":"0ccc818ba3aecccefe49bbab270ac8d64079fabeee863e8305c62599ebffa6de"} Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.371900 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372383 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t9h24\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-kube-api-access-t9h24\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372420 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372435 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372484 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372501 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372536 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372569 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f866bb7-5209-4275-8884-df6f074b3f7c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372587 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f866bb7-5209-4275-8884-df6f074b3f7c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372614 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.372642 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.373993 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.377773 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.377814 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.378256 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.379321 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f866bb7-5209-4275-8884-df6f074b3f7c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.383927 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f866bb7-5209-4275-8884-df6f074b3f7c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 
14:25:38.390055 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f866bb7-5209-4275-8884-df6f074b3f7c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.390271 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.390900 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.390972 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8a4c70e8190ff4e8e3819dded9c01c5615ea2f38f06b8e31f9d4a795c0f880b/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.402744 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9h24\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-kube-api-access-t9h24\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.403608 4836 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f866bb7-5209-4275-8884-df6f074b3f7c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.435183 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07fb2529-04ae-48ee-a7cb-9474d02cf39f\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f866bb7-5209-4275-8884-df6f074b3f7c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.447087 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:25:38 crc kubenswrapper[4836]: I0217 14:25:38.596851 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 14:25:38 crc kubenswrapper[4836]: W0217 14:25:38.619356 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9408e6_0474_4f84_842e_b1c20f42a7b8.slice/crio-b0fa4d163a114845da5261a3895974dd34d3a05172b60f7f2dca90e0c423de30 WatchSource:0}: Error finding container b0fa4d163a114845da5261a3895974dd34d3a05172b60f7f2dca90e0c423de30: Status 404 returned error can't find the container with id b0fa4d163a114845da5261a3895974dd34d3a05172b60f7f2dca90e0c423de30 Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.166275 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 14:25:39 crc kubenswrapper[4836]: W0217 14:25:39.214794 4836 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f866bb7_5209_4275_8884_df6f074b3f7c.slice/crio-aa9448becf8224adce1ecee542747964e3dcfc59ddd21273b79be9dd9f859c35 WatchSource:0}: Error finding container aa9448becf8224adce1ecee542747964e3dcfc59ddd21273b79be9dd9f859c35: Status 404 returned error can't find the container with id aa9448becf8224adce1ecee542747964e3dcfc59ddd21273b79be9dd9f859c35 Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.280133 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.281990 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.285900 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xwjkv" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.286371 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.286495 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.286676 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.293087 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.299943 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.303368 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"ec9408e6-0474-4f84-842e-b1c20f42a7b8","Type":"ContainerStarted","Data":"b0fa4d163a114845da5261a3895974dd34d3a05172b60f7f2dca90e0c423de30"} Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.314069 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6f866bb7-5209-4275-8884-df6f074b3f7c","Type":"ContainerStarted","Data":"aa9448becf8224adce1ecee542747964e3dcfc59ddd21273b79be9dd9f859c35"} Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.397338 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wccv\" (UniqueName: \"kubernetes.io/projected/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-kube-api-access-8wccv\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.397426 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.397462 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-kolla-config\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.397498 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-config-data-generated\") pod \"openstack-galera-0\" 
(UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.397553 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.398521 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.398643 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.398942 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-config-data-default\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.501242 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " 
pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.501331 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.501365 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.501420 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-config-data-default\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.501470 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wccv\" (UniqueName: \"kubernetes.io/projected/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-kube-api-access-8wccv\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.501510 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 
14:25:39.501536 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-kolla-config\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.501571 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.504850 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.505494 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-kolla-config\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.506571 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.507365 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-config-data-default\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.515943 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.516002 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/afb4cc9a14bf9ee01a267a35faf227427838c3a04bd3afa8c77910fa5827f2c9/globalmount\"" pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.529668 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.532559 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wccv\" (UniqueName: \"kubernetes.io/projected/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-kube-api-access-8wccv\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.532567 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd891e0-6f97-4fa3-8281-aa97232d6c6d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.573551 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b71cc85-2788-4798-ad47-2c45d9c63e69\") pod \"openstack-galera-0\" (UID: \"2fd891e0-6f97-4fa3-8281-aa97232d6c6d\") " pod="openstack/openstack-galera-0" Feb 17 14:25:39 crc kubenswrapper[4836]: I0217 14:25:39.620023 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.231318 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 14:25:40 crc kubenswrapper[4836]: W0217 14:25:40.336878 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fd891e0_6f97_4fa3_8281_aa97232d6c6d.slice/crio-b7ccab72d6bfe7a32d290bc2bd21c7153f92a355a14103904bd51d275095d6c6 WatchSource:0}: Error finding container b7ccab72d6bfe7a32d290bc2bd21c7153f92a355a14103904bd51d275095d6c6: Status 404 returned error can't find the container with id b7ccab72d6bfe7a32d290bc2bd21c7153f92a355a14103904bd51d275095d6c6 Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.753652 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.765209 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.774207 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-br528" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.774622 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.775159 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.796868 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.808490 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.827675 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.827735 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d533d4c9-53ad-455f-9db7-827245c43d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d533d4c9-53ad-455f-9db7-827245c43d24\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.827758 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.827787 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6016745-1634-4eb6-afee-b98ce9ab8f56-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.827841 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6016745-1634-4eb6-afee-b98ce9ab8f56-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.830183 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6785l\" (UniqueName: \"kubernetes.io/projected/a6016745-1634-4eb6-afee-b98ce9ab8f56-kube-api-access-6785l\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.830240 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6016745-1634-4eb6-afee-b98ce9ab8f56-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.830596 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.910752 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.913713 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.922544 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zckhv" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.922934 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.923116 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934223 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6016745-1634-4eb6-afee-b98ce9ab8f56-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934329 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6785l\" (UniqueName: \"kubernetes.io/projected/a6016745-1634-4eb6-afee-b98ce9ab8f56-kube-api-access-6785l\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934367 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6016745-1634-4eb6-afee-b98ce9ab8f56-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934409 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934463 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934493 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d533d4c9-53ad-455f-9db7-827245c43d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d533d4c9-53ad-455f-9db7-827245c43d24\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934513 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.934539 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6016745-1634-4eb6-afee-b98ce9ab8f56-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.935057 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6016745-1634-4eb6-afee-b98ce9ab8f56-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.940213 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.942581 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.942637 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a6016745-1634-4eb6-afee-b98ce9ab8f56-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.950642 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.950697 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d533d4c9-53ad-455f-9db7-827245c43d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d533d4c9-53ad-455f-9db7-827245c43d24\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de93ef36a3acf049f0dff48064a98354008e521ee562dddd1e6894d45770836f/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.956099 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6016745-1634-4eb6-afee-b98ce9ab8f56-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.959717 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6785l\" (UniqueName: \"kubernetes.io/projected/a6016745-1634-4eb6-afee-b98ce9ab8f56-kube-api-access-6785l\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.978955 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 14:25:40 crc kubenswrapper[4836]: I0217 14:25:40.979703 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6016745-1634-4eb6-afee-b98ce9ab8f56-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.012933 4836 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pvc-d533d4c9-53ad-455f-9db7-827245c43d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d533d4c9-53ad-455f-9db7-827245c43d24\") pod \"openstack-cell1-galera-0\" (UID: \"a6016745-1634-4eb6-afee-b98ce9ab8f56\") " pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.083680 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce3babe4-6d77-45ce-b9cc-626678d3ec64-kolla-config\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.083775 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3babe4-6d77-45ce-b9cc-626678d3ec64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.083819 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce3babe4-6d77-45ce-b9cc-626678d3ec64-config-data\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.083863 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpcn\" (UniqueName: \"kubernetes.io/projected/ce3babe4-6d77-45ce-b9cc-626678d3ec64-kube-api-access-wcpcn\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.083883 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ce3babe4-6d77-45ce-b9cc-626678d3ec64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.136952 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.185323 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpcn\" (UniqueName: \"kubernetes.io/projected/ce3babe4-6d77-45ce-b9cc-626678d3ec64-kube-api-access-wcpcn\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.185377 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce3babe4-6d77-45ce-b9cc-626678d3ec64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.185432 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce3babe4-6d77-45ce-b9cc-626678d3ec64-kolla-config\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.185490 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3babe4-6d77-45ce-b9cc-626678d3ec64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.185531 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ce3babe4-6d77-45ce-b9cc-626678d3ec64-config-data\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.186269 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce3babe4-6d77-45ce-b9cc-626678d3ec64-config-data\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.188660 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce3babe4-6d77-45ce-b9cc-626678d3ec64-kolla-config\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.195984 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce3babe4-6d77-45ce-b9cc-626678d3ec64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.209415 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3babe4-6d77-45ce-b9cc-626678d3ec64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.288352 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpcn\" (UniqueName: \"kubernetes.io/projected/ce3babe4-6d77-45ce-b9cc-626678d3ec64-kube-api-access-wcpcn\") pod \"memcached-0\" (UID: \"ce3babe4-6d77-45ce-b9cc-626678d3ec64\") " pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.310521 4836 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 14:25:41 crc kubenswrapper[4836]: I0217 14:25:41.460492 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2fd891e0-6f97-4fa3-8281-aa97232d6c6d","Type":"ContainerStarted","Data":"b7ccab72d6bfe7a32d290bc2bd21c7153f92a355a14103904bd51d275095d6c6"} Feb 17 14:25:42 crc kubenswrapper[4836]: I0217 14:25:42.019420 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 14:25:42 crc kubenswrapper[4836]: I0217 14:25:42.364481 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.515670 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.519860 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.525424 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vzz5b" Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.532400 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.636693 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnv8r\" (UniqueName: \"kubernetes.io/projected/87197028-3222-4c04-89a7-135997258e0d-kube-api-access-wnv8r\") pod \"kube-state-metrics-0\" (UID: \"87197028-3222-4c04-89a7-135997258e0d\") " pod="openstack/kube-state-metrics-0" Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.750506 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnv8r\" (UniqueName: 
\"kubernetes.io/projected/87197028-3222-4c04-89a7-135997258e0d-kube-api-access-wnv8r\") pod \"kube-state-metrics-0\" (UID: \"87197028-3222-4c04-89a7-135997258e0d\") " pod="openstack/kube-state-metrics-0" Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.813373 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnv8r\" (UniqueName: \"kubernetes.io/projected/87197028-3222-4c04-89a7-135997258e0d-kube-api-access-wnv8r\") pod \"kube-state-metrics-0\" (UID: \"87197028-3222-4c04-89a7-135997258e0d\") " pod="openstack/kube-state-metrics-0" Feb 17 14:25:43 crc kubenswrapper[4836]: I0217 14:25:43.859895 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.798264 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.801999 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.810626 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.810788 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.810941 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.811010 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.811174 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-mlnwd" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.817094 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.928203 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5jwk\" (UniqueName: \"kubernetes.io/projected/039a526c-4f5a-4641-9340-b18459145569-kube-api-access-p5jwk\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.928268 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/039a526c-4f5a-4641-9340-b18459145569-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 
14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.928319 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/039a526c-4f5a-4641-9340-b18459145569-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.932900 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/039a526c-4f5a-4641-9340-b18459145569-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.933042 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.933325 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:44 crc kubenswrapper[4836]: I0217 14:25:44.933399 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-web-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.035702 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.035792 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.035840 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5jwk\" (UniqueName: \"kubernetes.io/projected/039a526c-4f5a-4641-9340-b18459145569-kube-api-access-p5jwk\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.035863 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/039a526c-4f5a-4641-9340-b18459145569-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.035888 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/039a526c-4f5a-4641-9340-b18459145569-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.035925 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/039a526c-4f5a-4641-9340-b18459145569-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.035950 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.038244 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/039a526c-4f5a-4641-9340-b18459145569-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.043716 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.044733 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.063123 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/039a526c-4f5a-4641-9340-b18459145569-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.068041 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5jwk\" (UniqueName: \"kubernetes.io/projected/039a526c-4f5a-4641-9340-b18459145569-kube-api-access-p5jwk\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.071092 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/039a526c-4f5a-4641-9340-b18459145569-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.079799 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/039a526c-4f5a-4641-9340-b18459145569-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"039a526c-4f5a-4641-9340-b18459145569\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.103061 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.107602 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.120436 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.125330 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.125567 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.126615 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.126704 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.126966 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-x7d2x" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.127070 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.129408 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.135794 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.140001 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.145042 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.145188 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.145249 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.145312 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.145381 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.145415 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.145453 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.146768 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.146855 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8z8l\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-kube-api-access-t8z8l\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.146915 
4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.248805 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.248904 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8z8l\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-kube-api-access-t8z8l\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.248942 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.249008 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.249046 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.249078 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.249098 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.249130 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.249150 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 
14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.249173 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.250359 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.250664 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.253127 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.261122 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.261181 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94da064c7e93eda9403c837c8900dc0ec43041d0305170815d7b87148c388206/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.265589 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.269561 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8z8l\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-kube-api-access-t8z8l\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.271230 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.273942 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.275604 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.279915 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.338058 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:45 crc kubenswrapper[4836]: I0217 14:25:45.471196 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.463830 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ghk5k"] Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.465022 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.468475 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-llqfn" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.473082 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.473350 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.498490 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ghk5k"] Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.563743 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-j4jj9"] Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.570961 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.596177 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-run\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.596336 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-log-ovn\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.596374 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-run-ovn\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.596405 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5949d44f-ef6d-417e-9035-9b235cd59863-ovn-controller-tls-certs\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.596436 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6bz\" (UniqueName: \"kubernetes.io/projected/5949d44f-ef6d-417e-9035-9b235cd59863-kube-api-access-9d6bz\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " 
pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.596454 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5949d44f-ef6d-417e-9035-9b235cd59863-combined-ca-bundle\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.596483 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5949d44f-ef6d-417e-9035-9b235cd59863-scripts\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.605539 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-j4jj9"] Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698107 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-run-ovn\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698168 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5949d44f-ef6d-417e-9035-9b235cd59863-ovn-controller-tls-certs\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698194 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-run\") pod 
\"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698221 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6bz\" (UniqueName: \"kubernetes.io/projected/5949d44f-ef6d-417e-9035-9b235cd59863-kube-api-access-9d6bz\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698237 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5949d44f-ef6d-417e-9035-9b235cd59863-combined-ca-bundle\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698273 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5949d44f-ef6d-417e-9035-9b235cd59863-scripts\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698413 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cefe420d-f25c-4681-9ae8-b61f0a354282-scripts\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698442 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-log\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " 
pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698458 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-lib\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698490 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-run\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698522 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-etc-ovs\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698540 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-log-ovn\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.698572 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszrb\" (UniqueName: \"kubernetes.io/projected/cefe420d-f25c-4681-9ae8-b61f0a354282-kube-api-access-dszrb\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 
14:25:46.699255 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-run-ovn\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.701118 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-run\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.701449 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5949d44f-ef6d-417e-9035-9b235cd59863-var-log-ovn\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.702873 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5949d44f-ef6d-417e-9035-9b235cd59863-scripts\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.714240 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5949d44f-ef6d-417e-9035-9b235cd59863-ovn-controller-tls-certs\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.729428 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5949d44f-ef6d-417e-9035-9b235cd59863-combined-ca-bundle\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.790100 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6bz\" (UniqueName: \"kubernetes.io/projected/5949d44f-ef6d-417e-9035-9b235cd59863-kube-api-access-9d6bz\") pod \"ovn-controller-ghk5k\" (UID: \"5949d44f-ef6d-417e-9035-9b235cd59863\") " pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.799969 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-log\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.800023 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-lib\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.800164 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-etc-ovs\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.800198 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dszrb\" (UniqueName: \"kubernetes.io/projected/cefe420d-f25c-4681-9ae8-b61f0a354282-kube-api-access-dszrb\") pod \"ovn-controller-ovs-j4jj9\" (UID: 
\"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.800272 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-run\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.800429 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cefe420d-f25c-4681-9ae8-b61f0a354282-scripts\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.801832 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-lib\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.801972 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-log\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.802553 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-var-run\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.802751 4836 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cefe420d-f25c-4681-9ae8-b61f0a354282-etc-ovs\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.806483 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cefe420d-f25c-4681-9ae8-b61f0a354282-scripts\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.831007 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dszrb\" (UniqueName: \"kubernetes.io/projected/cefe420d-f25c-4681-9ae8-b61f0a354282-kube-api-access-dszrb\") pod \"ovn-controller-ovs-j4jj9\" (UID: \"cefe420d-f25c-4681-9ae8-b61f0a354282\") " pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.831464 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ghk5k" Feb 17 14:25:46 crc kubenswrapper[4836]: I0217 14:25:46.899588 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.835672 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.837690 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.842747 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.842995 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.843221 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.843377 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.843537 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-r7fq2" Feb 17 14:25:47 crc kubenswrapper[4836]: I0217 14:25:47.850815 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.031541 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.031641 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4xz\" (UniqueName: \"kubernetes.io/projected/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-kube-api-access-8v4xz\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.032414 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.032549 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.032631 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.032727 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.032865 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-config\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.033125 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.136863 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.137038 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.137950 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.137984 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v4xz\" (UniqueName: \"kubernetes.io/projected/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-kube-api-access-8v4xz\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.138135 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.138228 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.138328 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.138434 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.138595 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-config\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.142640 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " 
pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.143330 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-config\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.143799 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.144081 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.144747 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.146608 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.146667 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b21ed61e8b484d21a8479a2c41be99518c171635272917cf20fee632a18901a/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.173427 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v4xz\" (UniqueName: \"kubernetes.io/projected/348d02a8-d1b2-4bd3-9f4c-9153e24a5f19-kube-api-access-8v4xz\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.215504 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f4437ae-059c-47fb-bde0-9623e03fca9c\") pod \"ovsdbserver-sb-0\" (UID: \"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19\") " pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:48 crc kubenswrapper[4836]: I0217 14:25:48.482516 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.707835 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.712032 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.717512 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.717862 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cxtfj" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.718114 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.723255 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.755008 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.852374 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55bc1962-7790-448a-838c-cb13a870ea23-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.852510 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bc1962-7790-448a-838c-cb13a870ea23-config\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.852617 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.852703 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.852774 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.852844 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55bc1962-7790-448a-838c-cb13a870ea23-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.852939 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69mz\" (UniqueName: \"kubernetes.io/projected/55bc1962-7790-448a-838c-cb13a870ea23-kube-api-access-w69mz\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.853005 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.955035 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55bc1962-7790-448a-838c-cb13a870ea23-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.955467 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bc1962-7790-448a-838c-cb13a870ea23-config\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.955511 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.955546 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.955600 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 
14:25:50.955621 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55bc1962-7790-448a-838c-cb13a870ea23-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.955658 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69mz\" (UniqueName: \"kubernetes.io/projected/55bc1962-7790-448a-838c-cb13a870ea23-kube-api-access-w69mz\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.955675 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.956155 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55bc1962-7790-448a-838c-cb13a870ea23-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.957253 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bc1962-7790-448a-838c-cb13a870ea23-config\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.958334 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55bc1962-7790-448a-838c-cb13a870ea23-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.961972 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.962031 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/acb6dfbf3ca60bf020d14ded6b9677efb27a4261f5bb0945a55b0cd863775a84/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.963844 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.967884 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.977147 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6016745-1634-4eb6-afee-b98ce9ab8f56","Type":"ContainerStarted","Data":"d9ff2705d4d9971449e56a8df2c8bcf12c30b8741924c07275058e3b28283829"} Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.979204 
4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69mz\" (UniqueName: \"kubernetes.io/projected/55bc1962-7790-448a-838c-cb13a870ea23-kube-api-access-w69mz\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:50 crc kubenswrapper[4836]: I0217 14:25:50.984145 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc1962-7790-448a-838c-cb13a870ea23-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:51 crc kubenswrapper[4836]: I0217 14:25:51.010774 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ca3a731-ad92-46ba-aef4-201b1c5ff483\") pod \"ovsdbserver-nb-0\" (UID: \"55bc1962-7790-448a-838c-cb13a870ea23\") " pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:51 crc kubenswrapper[4836]: I0217 14:25:51.042284 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 14:25:51 crc kubenswrapper[4836]: I0217 14:25:51.991314 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ce3babe4-6d77-45ce-b9cc-626678d3ec64","Type":"ContainerStarted","Data":"215ebeb5a59ba861cc33d511721255c32dabb5083993a84473b1381d4f746889"} Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.035560 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.319250 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.321006 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.324779 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.325061 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.325207 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.325482 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-fc2f6" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.325503 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.333758 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.488514 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.488653 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.488746 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.488796 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.488828 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljr6\" (UniqueName: \"kubernetes.io/projected/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-kube-api-access-vljr6\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.540872 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.542397 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.544674 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.545737 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.552955 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.554791 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.591065 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.591129 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljr6\" (UniqueName: \"kubernetes.io/projected/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-kube-api-access-vljr6\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.591189 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" 
(UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.591287 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.591367 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.596409 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.597864 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.605616 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.613770 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.657779 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.659223 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.669848 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.670059 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.674119 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljr6\" (UniqueName: \"kubernetes.io/projected/33c54f8c-91c4-4742-b545-d0e2c4e85fe2-kube-api-access-vljr6\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-r4gdh\" (UID: \"33c54f8c-91c4-4742-b545-d0e2c4e85fe2\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.687511 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.735688 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/487d19a3-7f23-4945-bfe1-6231a37a84c6-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.735743 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.735879 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.735918 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8kl\" (UniqueName: \"kubernetes.io/projected/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-kube-api-access-fc8kl\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.735998 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.736109 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: 
I0217 14:25:56.736132 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.736186 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.736247 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmwjc\" (UniqueName: \"kubernetes.io/projected/487d19a3-7f23-4945-bfe1-6231a37a84c6-kube-api-access-dmwjc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.736283 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.736436 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.837652 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8kl\" (UniqueName: \"kubernetes.io/projected/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-kube-api-access-fc8kl\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.837731 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.837798 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.837825 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.837968 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.838859 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.839879 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmwjc\" (UniqueName: \"kubernetes.io/projected/487d19a3-7f23-4945-bfe1-6231a37a84c6-kube-api-access-dmwjc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.839918 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: 
\"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.839976 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.840073 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/487d19a3-7f23-4945-bfe1-6231a37a84c6-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.840125 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.840216 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.844636 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.844930 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/487d19a3-7f23-4945-bfe1-6231a37a84c6-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.845583 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.847284 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.851007 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.851706 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.852030 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/487d19a3-7f23-4945-bfe1-6231a37a84c6-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.890810 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.891874 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8kl\" (UniqueName: \"kubernetes.io/projected/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-kube-api-access-fc8kl\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.892753 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.896588 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.896881 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.897202 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.897520 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.899277 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.899387 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.902954 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmwjc\" (UniqueName: \"kubernetes.io/projected/487d19a3-7f23-4945-bfe1-6231a37a84c6-kube-api-access-dmwjc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j\" (UID: \"487d19a3-7f23-4945-bfe1-6231a37a84c6\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.912144 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/27c5f450-8bef-4732-a7fb-272d9b5a4ea8-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-fsq2h\" (UID: \"27c5f450-8bef-4732-a7fb-272d9b5a4ea8\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.912229 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.942850 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.942930 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.942966 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.942988 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.943014 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.943066 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.943126 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.943151 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2pt\" (UniqueName: \"kubernetes.io/projected/974f66b3-690f-4008-949d-1d57c978d427-kube-api-access-tl2pt\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.943193 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.943468 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.977527 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf"] Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.979251 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:56 crc kubenswrapper[4836]: I0217 14:25:56.996911 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.027875 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-s855t" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.045252 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.059949 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.062141 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.062591 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.085277 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.100180 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl2pt\" (UniqueName: \"kubernetes.io/projected/974f66b3-690f-4008-949d-1d57c978d427-kube-api-access-tl2pt\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.101609 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.102339 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.102564 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.065755 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.099968 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.046956 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.068675 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.081028 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.111264 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.080447 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.112824 4836 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.114407 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/974f66b3-690f-4008-949d-1d57c978d427-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.130708 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/974f66b3-690f-4008-949d-1d57c978d427-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.146039 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl2pt\" (UniqueName: \"kubernetes.io/projected/974f66b3-690f-4008-949d-1d57c978d427-kube-api-access-tl2pt\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-q78z5\" (UID: \"974f66b3-690f-4008-949d-1d57c978d427\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.177497 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212525 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212599 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212654 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212694 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 
14:25:57.212749 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212778 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212847 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212884 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8s9\" (UniqueName: \"kubernetes.io/projected/a977b831-7959-4509-93bf-a45b375ca722-kube-api-access-7f8s9\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.212920 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.273654 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316443 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316512 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8s9\" (UniqueName: \"kubernetes.io/projected/a977b831-7959-4509-93bf-a45b375ca722-kube-api-access-7f8s9\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316544 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316599 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316616 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316650 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316676 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316709 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 
14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.316725 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.318536 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.318592 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.318628 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.318905 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-lokistack-gateway\") 
pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.319432 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a977b831-7959-4509-93bf-a45b375ca722-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.320617 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.320651 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.321096 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a977b831-7959-4509-93bf-a45b375ca722-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.343106 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8s9\" 
(UniqueName: \"kubernetes.io/projected/a977b831-7959-4509-93bf-a45b375ca722-kube-api-access-7f8s9\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nbvnf\" (UID: \"a977b831-7959-4509-93bf-a45b375ca722\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.347983 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.495392 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.496569 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.498572 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.499441 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.552190 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.633719 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.633859 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.633915 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.633984 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.634116 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c33fb01-9bf7-43f1-86d5-004e70d3721c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.634154 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.634204 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hckzp\" (UniqueName: \"kubernetes.io/projected/1c33fb01-9bf7-43f1-86d5-004e70d3721c-kube-api-access-hckzp\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.634255 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.653570 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.655002 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.659380 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.659575 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.695144 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735558 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735626 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735656 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735716 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735754 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9wnr\" (UniqueName: \"kubernetes.io/projected/e2c3e649-7933-49e2-800c-b66dbd377ac6-kube-api-access-z9wnr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735782 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c33fb01-9bf7-43f1-86d5-004e70d3721c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735819 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.735846 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 
14:25:57.735964 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c3e649-7933-49e2-800c-b66dbd377ac6-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736071 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hckzp\" (UniqueName: \"kubernetes.io/projected/1c33fb01-9bf7-43f1-86d5-004e70d3721c-kube-api-access-hckzp\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736110 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736163 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736192 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736310 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736409 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736456 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736543 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.736882 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c33fb01-9bf7-43f1-86d5-004e70d3721c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: 
\"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.737085 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.741065 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.742144 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.745431 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/1c33fb01-9bf7-43f1-86d5-004e70d3721c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.762335 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hckzp\" (UniqueName: 
\"kubernetes.io/projected/1c33fb01-9bf7-43f1-86d5-004e70d3721c-kube-api-access-hckzp\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.763634 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.766244 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"1c33fb01-9bf7-43f1-86d5-004e70d3721c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.799202 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.802466 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.805618 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.806016 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.822557 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.839706 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c3e649-7933-49e2-800c-b66dbd377ac6-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.839782 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.839840 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.839894 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.839946 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmc66\" (UniqueName: 
\"kubernetes.io/projected/d370240e-d6c1-4d9c-9877-293afa6e77f2-kube-api-access-rmc66\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.839989 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840017 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d370240e-d6c1-4d9c-9877-293afa6e77f2-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840047 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840100 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840130 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840168 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840331 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840420 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9wnr\" (UniqueName: \"kubernetes.io/projected/e2c3e649-7933-49e2-800c-b66dbd377ac6-kube-api-access-z9wnr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.840496 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-compactor-http\") pod 
\"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.842045 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.842343 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c3e649-7933-49e2-800c-b66dbd377ac6-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.843214 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.843991 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.858023 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.862448 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e2c3e649-7933-49e2-800c-b66dbd377ac6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.900525 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9wnr\" (UniqueName: \"kubernetes.io/projected/e2c3e649-7933-49e2-800c-b66dbd377ac6-kube-api-access-z9wnr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.917147 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e2c3e649-7933-49e2-800c-b66dbd377ac6\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.918200 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.942742 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.942936 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.943054 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmc66\" (UniqueName: \"kubernetes.io/projected/d370240e-d6c1-4d9c-9877-293afa6e77f2-kube-api-access-rmc66\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.943139 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.943200 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d370240e-d6c1-4d9c-9877-293afa6e77f2-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.943226 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: 
\"d370240e-d6c1-4d9c-9877-293afa6e77f2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.944065 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.944145 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.944455 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d370240e-d6c1-4d9c-9877-293afa6e77f2-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.944641 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.952479 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" 
(UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.952859 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.955049 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d370240e-d6c1-4d9c-9877-293afa6e77f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.967210 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmc66\" (UniqueName: \"kubernetes.io/projected/d370240e-d6c1-4d9c-9877-293afa6e77f2-kube-api-access-rmc66\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.982404 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d370240e-d6c1-4d9c-9877-293afa6e77f2\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:25:57 crc kubenswrapper[4836]: I0217 14:25:57.996925 4836 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:25:58 crc kubenswrapper[4836]: I0217 14:25:58.146056 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:26:00 crc kubenswrapper[4836]: I0217 14:26:00.135502 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:26:00 crc kubenswrapper[4836]: I0217 14:26:00.135598 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:26:05 crc kubenswrapper[4836]: E0217 14:26:05.777237 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 14:26:05 crc kubenswrapper[4836]: E0217 14:26:05.779314 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6785l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(a6016745-1634-4eb6-afee-b98ce9ab8f56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:26:05 crc kubenswrapper[4836]: E0217 14:26:05.781031 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="a6016745-1634-4eb6-afee-b98ce9ab8f56" Feb 17 14:26:06 crc kubenswrapper[4836]: E0217 14:26:06.238758 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="a6016745-1634-4eb6-afee-b98ce9ab8f56" Feb 17 14:26:06 crc kubenswrapper[4836]: E0217 14:26:06.492698 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 17 14:26:06 crc kubenswrapper[4836]: E0217 14:26:06.492901 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5b4h5fbh5d6h5f9h9bh659h675h56dhbch5fdh68bh9fh699h5b6h5b5h668hcdhcdh65h68fh649h59h8bh64fh65bhc7h569h68hb8h544h5bbh686q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcpcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(ce3babe4-6d77-45ce-b9cc-626678d3ec64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:26:06 crc kubenswrapper[4836]: E0217 14:26:06.494003 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="ce3babe4-6d77-45ce-b9cc-626678d3ec64" Feb 17 14:26:07 crc kubenswrapper[4836]: I0217 14:26:07.249322 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerStarted","Data":"cefd70541e5e6c57648aaec13bc3ac8008ad32d2cca2fd2d95d8a18012223fb3"} Feb 17 14:26:07 crc kubenswrapper[4836]: I0217 14:26:07.250909 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"55bc1962-7790-448a-838c-cb13a870ea23","Type":"ContainerStarted","Data":"06062ba94e5713ddd8c227b30be89f5edc2cd18f7f07ef99efd389454307ed51"} Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.252489 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="ce3babe4-6d77-45ce-b9cc-626678d3ec64" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.625470 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.625719 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zckh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-jbcz5_openstack(e14b6d2f-85ef-4f0c-8a81-426aee02b456): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.626944 4836 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.636035 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.636208 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qmnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-46wms_openstack(f1ebdbfb-7f75-4205-80ca-0ee085a21c0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.637352 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" podUID="f1ebdbfb-7f75-4205-80ca-0ee085a21c0b" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.639898 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.640077 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvxj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ggz9w_openstack(24a665ea-1793-426d-b4df-48bfdd048f1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.641776 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" podUID="24a665ea-1793-426d-b4df-48bfdd048f1c" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.673261 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.673438 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ntrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-s6vqb_openstack(63d320ce-8669-4285-b4bc-dbb6eeb9a190): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:26:07 crc kubenswrapper[4836]: E0217 14:26:07.675400 4836 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" Feb 17 14:26:08 crc kubenswrapper[4836]: I0217 14:26:08.162258 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 14:26:08 crc kubenswrapper[4836]: I0217 14:26:08.323340 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"039a526c-4f5a-4641-9340-b18459145569","Type":"ContainerStarted","Data":"4cbe90198ede73e79e317e522482bdcf15991436a05ef6e581e62bb3968c9ce7"} Feb 17 14:26:08 crc kubenswrapper[4836]: E0217 14:26:08.334512 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" Feb 17 14:26:08 crc kubenswrapper[4836]: E0217 14:26:08.343536 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" Feb 17 14:26:08 crc kubenswrapper[4836]: I0217 14:26:08.421123 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 14:26:08 crc kubenswrapper[4836]: I0217 14:26:08.991980 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ghk5k"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.008366 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:26:09 crc 
kubenswrapper[4836]: W0217 14:26:09.067896 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5949d44f_ef6d_417e_9035_9b235cd59863.slice/crio-b5ab6de8a0439c7bcacdde47f557ba468960c7db13041b91e5eadd707b7c9b08 WatchSource:0}: Error finding container b5ab6de8a0439c7bcacdde47f557ba468960c7db13041b91e5eadd707b7c9b08: Status 404 returned error can't find the container with id b5ab6de8a0439c7bcacdde47f557ba468960c7db13041b91e5eadd707b7c9b08 Feb 17 14:26:09 crc kubenswrapper[4836]: W0217 14:26:09.079749 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87197028_3222_4c04_89a7_135997258e0d.slice/crio-ac1cfd9dcf6c1abc6e025d9d148f792c18f57e036146c98f80e4d81f4745553b WatchSource:0}: Error finding container ac1cfd9dcf6c1abc6e025d9d148f792c18f57e036146c98f80e4d81f4745553b: Status 404 returned error can't find the container with id ac1cfd9dcf6c1abc6e025d9d148f792c18f57e036146c98f80e4d81f4745553b Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.347307 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ghk5k" event={"ID":"5949d44f-ef6d-417e-9035-9b235cd59863","Type":"ContainerStarted","Data":"b5ab6de8a0439c7bcacdde47f557ba468960c7db13041b91e5eadd707b7c9b08"} Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.357983 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19","Type":"ContainerStarted","Data":"dfa019eb6dfc780d8a7bb7c10f837f86c6cf9b05a42be0d211fd953b53b28d68"} Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.363844 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"87197028-3222-4c04-89a7-135997258e0d","Type":"ContainerStarted","Data":"ac1cfd9dcf6c1abc6e025d9d148f792c18f57e036146c98f80e4d81f4745553b"} Feb 17 14:26:09 crc 
kubenswrapper[4836]: I0217 14:26:09.372165 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2fd891e0-6f97-4fa3-8281-aa97232d6c6d","Type":"ContainerStarted","Data":"7589ff250191c7eebfbce02cc148fe3104e0d0057941b75d9ae842fb9b393bcb"} Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.452652 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-j4jj9"] Feb 17 14:26:09 crc kubenswrapper[4836]: W0217 14:26:09.467223 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcefe420d_f25c_4681_9ae8_b61f0a354282.slice/crio-f7165b4765ba88e3f536fab00d381ac7718ee33e729a9b3168c1033bcef519d3 WatchSource:0}: Error finding container f7165b4765ba88e3f536fab00d381ac7718ee33e729a9b3168c1033bcef519d3: Status 404 returned error can't find the container with id f7165b4765ba88e3f536fab00d381ac7718ee33e729a9b3168c1033bcef519d3 Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.689625 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.724446 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.753874 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.767584 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.778231 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.791356 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.802320 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.812596 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh"] Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.820677 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.821268 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:26:09 crc kubenswrapper[4836]: W0217 14:26:09.936991 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod487d19a3_7f23_4945_bfe1_6231a37a84c6.slice/crio-ba38186ea7f2c89b3b027c148184bd75cf8b2c02b4be30bfeeca2d6fd8389527 WatchSource:0}: Error finding container ba38186ea7f2c89b3b027c148184bd75cf8b2c02b4be30bfeeca2d6fd8389527: Status 404 returned error can't find the container with id ba38186ea7f2c89b3b027c148184bd75cf8b2c02b4be30bfeeca2d6fd8389527 Feb 17 14:26:09 crc kubenswrapper[4836]: W0217 14:26:09.941843 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c3e649_7933_49e2_800c_b66dbd377ac6.slice/crio-ce854d82f7fdab0f8db2fadd426f8a89bbf9188aaeda8fb1b8be61e0563586d8 WatchSource:0}: Error finding container ce854d82f7fdab0f8db2fadd426f8a89bbf9188aaeda8fb1b8be61e0563586d8: Status 404 returned error can't find the container with id ce854d82f7fdab0f8db2fadd426f8a89bbf9188aaeda8fb1b8be61e0563586d8 Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.945585 4836 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-config\") pod \"24a665ea-1793-426d-b4df-48bfdd048f1c\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.945643 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qmnr\" (UniqueName: \"kubernetes.io/projected/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-kube-api-access-8qmnr\") pod \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") " Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.945719 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-config\") pod \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\" (UID: \"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b\") " Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.945794 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-dns-svc\") pod \"24a665ea-1793-426d-b4df-48bfdd048f1c\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.945851 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvxj4\" (UniqueName: \"kubernetes.io/projected/24a665ea-1793-426d-b4df-48bfdd048f1c-kube-api-access-pvxj4\") pod \"24a665ea-1793-426d-b4df-48bfdd048f1c\" (UID: \"24a665ea-1793-426d-b4df-48bfdd048f1c\") " Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.948957 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24a665ea-1793-426d-b4df-48bfdd048f1c" (UID: "24a665ea-1793-426d-b4df-48bfdd048f1c"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.950606 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-config" (OuterVolumeSpecName: "config") pod "24a665ea-1793-426d-b4df-48bfdd048f1c" (UID: "24a665ea-1793-426d-b4df-48bfdd048f1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.955291 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-kube-api-access-8qmnr" (OuterVolumeSpecName: "kube-api-access-8qmnr") pod "f1ebdbfb-7f75-4205-80ca-0ee085a21c0b" (UID: "f1ebdbfb-7f75-4205-80ca-0ee085a21c0b"). InnerVolumeSpecName "kube-api-access-8qmnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.955433 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a665ea-1793-426d-b4df-48bfdd048f1c-kube-api-access-pvxj4" (OuterVolumeSpecName: "kube-api-access-pvxj4") pod "24a665ea-1793-426d-b4df-48bfdd048f1c" (UID: "24a665ea-1793-426d-b4df-48bfdd048f1c"). InnerVolumeSpecName "kube-api-access-pvxj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:09 crc kubenswrapper[4836]: I0217 14:26:09.961777 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-config" (OuterVolumeSpecName: "config") pod "f1ebdbfb-7f75-4205-80ca-0ee085a21c0b" (UID: "f1ebdbfb-7f75-4205-80ca-0ee085a21c0b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:09 crc kubenswrapper[4836]: W0217 14:26:09.965796 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33c54f8c_91c4_4742_b545_d0e2c4e85fe2.slice/crio-2aea778e655b61fe26ab4925dc0dec08762b82aac7ed8bba7974e98ef9d0e2f2 WatchSource:0}: Error finding container 2aea778e655b61fe26ab4925dc0dec08762b82aac7ed8bba7974e98ef9d0e2f2: Status 404 returned error can't find the container with id 2aea778e655b61fe26ab4925dc0dec08762b82aac7ed8bba7974e98ef9d0e2f2 Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.048633 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.048687 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qmnr\" (UniqueName: \"kubernetes.io/projected/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-kube-api-access-8qmnr\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.048701 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.048715 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a665ea-1793-426d-b4df-48bfdd048f1c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.048727 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvxj4\" (UniqueName: \"kubernetes.io/projected/24a665ea-1793-426d-b4df-48bfdd048f1c-kube-api-access-pvxj4\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.386395 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" event={"ID":"a977b831-7959-4509-93bf-a45b375ca722","Type":"ContainerStarted","Data":"197e017888d334883246dc517cc6ebcda5d85e0d0ec5f5b401f0cc5f753788a8"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.387988 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j4jj9" event={"ID":"cefe420d-f25c-4681-9ae8-b61f0a354282","Type":"ContainerStarted","Data":"f7165b4765ba88e3f536fab00d381ac7718ee33e729a9b3168c1033bcef519d3"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.391543 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6f866bb7-5209-4275-8884-df6f074b3f7c","Type":"ContainerStarted","Data":"85576fe15acb4ec82e880a96b65a7ac8f381e29f3114bed6ed63c37985fe03f0"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.393981 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" event={"ID":"27c5f450-8bef-4732-a7fb-272d9b5a4ea8","Type":"ContainerStarted","Data":"402269aa664e653829b9605208deffb731e1154b5ccc2f3a77365bf572021284"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.397077 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"e2c3e649-7933-49e2-800c-b66dbd377ac6","Type":"ContainerStarted","Data":"ce854d82f7fdab0f8db2fadd426f8a89bbf9188aaeda8fb1b8be61e0563586d8"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.398991 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" event={"ID":"33c54f8c-91c4-4742-b545-d0e2c4e85fe2","Type":"ContainerStarted","Data":"2aea778e655b61fe26ab4925dc0dec08762b82aac7ed8bba7974e98ef9d0e2f2"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.400013 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" event={"ID":"974f66b3-690f-4008-949d-1d57c978d427","Type":"ContainerStarted","Data":"909cbde8c79ad044d96184d526b2b525ce1fb1c3c2bb3d48d600956da9444a32"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.403232 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" event={"ID":"24a665ea-1793-426d-b4df-48bfdd048f1c","Type":"ContainerDied","Data":"a6bb86072d0798a1afb87a4b547d5cc9e6e5bb7f7f723d97aa4e7592e494e55c"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.403328 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ggz9w" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.407001 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"1c33fb01-9bf7-43f1-86d5-004e70d3721c","Type":"ContainerStarted","Data":"da5a1432ebe8c39c9387ed27f7ccb7165ee9a6ce317dda015178127c8bea9a8f"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.409573 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec9408e6-0474-4f84-842e-b1c20f42a7b8","Type":"ContainerStarted","Data":"1e0077eb33d7cdccabd3d53eadba26bb33ef9899ccdc0c0e3003d7b300233249"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.412452 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" event={"ID":"487d19a3-7f23-4945-bfe1-6231a37a84c6","Type":"ContainerStarted","Data":"ba38186ea7f2c89b3b027c148184bd75cf8b2c02b4be30bfeeca2d6fd8389527"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.418129 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" event={"ID":"f1ebdbfb-7f75-4205-80ca-0ee085a21c0b","Type":"ContainerDied","Data":"b0dca2b7fd572359a51505f1dec3dc3d3db3e7f58bf21d6f39749cf427d85d3b"} Feb 17 
14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.418207 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-46wms" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.447720 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"d370240e-d6c1-4d9c-9877-293afa6e77f2","Type":"ContainerStarted","Data":"15680e21e6ac532d32c569f9b60dab424465fbc5f504aee69bdfe34e5577a70d"} Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.529734 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ggz9w"] Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.542216 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ggz9w"] Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.638051 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a665ea-1793-426d-b4df-48bfdd048f1c" path="/var/lib/kubelet/pods/24a665ea-1793-426d-b4df-48bfdd048f1c/volumes" Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.643406 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-46wms"] Feb 17 14:26:10 crc kubenswrapper[4836]: I0217 14:26:10.643706 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-46wms"] Feb 17 14:26:12 crc kubenswrapper[4836]: I0217 14:26:12.582340 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ebdbfb-7f75-4205-80ca-0ee085a21c0b" path="/var/lib/kubelet/pods/f1ebdbfb-7f75-4205-80ca-0ee085a21c0b/volumes" Feb 17 14:26:13 crc kubenswrapper[4836]: I0217 14:26:13.496160 4836 generic.go:334] "Generic (PLEG): container finished" podID="2fd891e0-6f97-4fa3-8281-aa97232d6c6d" containerID="7589ff250191c7eebfbce02cc148fe3104e0d0057941b75d9ae842fb9b393bcb" exitCode=0 Feb 17 14:26:13 crc kubenswrapper[4836]: I0217 14:26:13.496211 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2fd891e0-6f97-4fa3-8281-aa97232d6c6d","Type":"ContainerDied","Data":"7589ff250191c7eebfbce02cc148fe3104e0d0057941b75d9ae842fb9b393bcb"} Feb 17 14:26:19 crc kubenswrapper[4836]: I0217 14:26:19.727910 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2fd891e0-6f97-4fa3-8281-aa97232d6c6d","Type":"ContainerStarted","Data":"cbff2d76a45a19fc91e95a754dc92867ba6368787be797f1530c25ebdc789c33"} Feb 17 14:26:19 crc kubenswrapper[4836]: I0217 14:26:19.731007 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" event={"ID":"487d19a3-7f23-4945-bfe1-6231a37a84c6","Type":"ContainerStarted","Data":"2c7e51c42a8648fdf229dc91eb17c49c900557e3036f650c0634dfc08051dcbb"} Feb 17 14:26:19 crc kubenswrapper[4836]: I0217 14:26:19.731195 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:26:19 crc kubenswrapper[4836]: I0217 14:26:19.788401 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=14.216513853 podStartE2EDuration="41.788362695s" podCreationTimestamp="2026-02-17 14:25:38 +0000 UTC" firstStartedPulling="2026-02-17 14:25:40.357838521 +0000 UTC m=+1166.700789321" lastFinishedPulling="2026-02-17 14:26:07.929709894 +0000 UTC m=+1194.272638163" observedRunningTime="2026-02-17 14:26:19.755009892 +0000 UTC m=+1206.097938181" watchObservedRunningTime="2026-02-17 14:26:19.788362695 +0000 UTC m=+1206.131291134" Feb 17 14:26:19 crc kubenswrapper[4836]: I0217 14:26:19.791913 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" podStartSLOduration=16.587228087 podStartE2EDuration="23.791886858s" 
podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.940014041 +0000 UTC m=+1196.282942310" lastFinishedPulling="2026-02-17 14:26:17.144672812 +0000 UTC m=+1203.487601081" observedRunningTime="2026-02-17 14:26:19.78365839 +0000 UTC m=+1206.126586779" watchObservedRunningTime="2026-02-17 14:26:19.791886858 +0000 UTC m=+1206.134815127" Feb 17 14:26:20 crc kubenswrapper[4836]: I0217 14:26:20.753052 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerStarted","Data":"1aeb38549c5093ddcbd19fe025e8df306afcc08ba355a33bcd16537686f0d989"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.762467 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"e2c3e649-7933-49e2-800c-b66dbd377ac6","Type":"ContainerStarted","Data":"64e1582fb06d05de2483b260f089507530e9d2d49cb5a107d301863d329da8a2"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.763140 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.766015 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" event={"ID":"33c54f8c-91c4-4742-b545-d0e2c4e85fe2","Type":"ContainerStarted","Data":"03ebbb1b3ce184b45c3662c6adbec0c02ab9f4f09ca958693abcdfeffe8f9ee5"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.766154 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.768956 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" 
event={"ID":"1c33fb01-9bf7-43f1-86d5-004e70d3721c","Type":"ContainerStarted","Data":"e35267da76ae00fa184a439eba6ca0d6a766d7b2cc5eb5e026aaf5d342b3a4f6"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.769145 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.771852 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ghk5k" event={"ID":"5949d44f-ef6d-417e-9035-9b235cd59863","Type":"ContainerStarted","Data":"32287f9f678dbaed07061e53120bb2d03b0a8363de0671992dceb7e7bed21aa9"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.772022 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ghk5k" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.773961 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" event={"ID":"27c5f450-8bef-4732-a7fb-272d9b5a4ea8","Type":"ContainerStarted","Data":"dde5854aa96cd113469c300d94b30c2eb9058189e0ab789d9fea33d42b96a117"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.774007 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.776244 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"d370240e-d6c1-4d9c-9877-293afa6e77f2","Type":"ContainerStarted","Data":"ba0ecd0b100eb9ea63f73bf1a5ca84603cc3ebac6a987a21846e914b41a2eb7d"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.776343 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.778049 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" event={"ID":"a977b831-7959-4509-93bf-a45b375ca722","Type":"ContainerStarted","Data":"05ad14d1ceb71d26ff8ffe5dbb522f582d3b9ff7f132b341d7172c0b5b23e36c"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.778236 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.787470 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ce3babe4-6d77-45ce-b9cc-626678d3ec64","Type":"ContainerStarted","Data":"4e19f0fa1d919c16dad83c81a615cac893a67fd86f695e784545453a405b8ac2"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.789376 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.796786 4836 generic.go:334] "Generic (PLEG): container finished" podID="cefe420d-f25c-4681-9ae8-b61f0a354282" containerID="636101e7aa1ec6e32cd2fd443d1dc53a1da78342ee17ec1c5f1e7bf1f82351d7" exitCode=0 Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.796885 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j4jj9" event={"ID":"cefe420d-f25c-4681-9ae8-b61f0a354282","Type":"ContainerDied","Data":"636101e7aa1ec6e32cd2fd443d1dc53a1da78342ee17ec1c5f1e7bf1f82351d7"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.801595 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"87197028-3222-4c04-89a7-135997258e0d","Type":"ContainerStarted","Data":"6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.802204 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.804690 4836 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55bc1962-7790-448a-838c-cb13a870ea23","Type":"ContainerStarted","Data":"f31dffc396a57b84b92df802b0646ee470765b8b992c8550f0246d91f5466b27"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.805141 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=18.295408192 podStartE2EDuration="25.805109883s" podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.9490564 +0000 UTC m=+1196.291984669" lastFinishedPulling="2026-02-17 14:26:17.458758071 +0000 UTC m=+1203.801686360" observedRunningTime="2026-02-17 14:26:21.786440259 +0000 UTC m=+1208.129368538" watchObservedRunningTime="2026-02-17 14:26:21.805109883 +0000 UTC m=+1208.148038162" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.807868 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" event={"ID":"974f66b3-690f-4008-949d-1d57c978d427","Type":"ContainerStarted","Data":"9930295418d6f14f49b1a38e8481e519c6d37978249d30a530eb887ce5e5ce4a"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.807918 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.808281 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.820903 4836 generic.go:334] "Generic (PLEG): container finished" podID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerID="57ea1eebc786d3a8ae12a685cfa802406deab325110c652e436a68a0c258022f" exitCode=0 Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.821076 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" 
event={"ID":"63d320ce-8669-4285-b4bc-dbb6eeb9a190","Type":"ContainerDied","Data":"57ea1eebc786d3a8ae12a685cfa802406deab325110c652e436a68a0c258022f"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.821522 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" podStartSLOduration=18.552252726 podStartE2EDuration="25.821501737s" podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.967938041 +0000 UTC m=+1196.310866310" lastFinishedPulling="2026-02-17 14:26:17.237187052 +0000 UTC m=+1203.580115321" observedRunningTime="2026-02-17 14:26:21.809698434 +0000 UTC m=+1208.152626723" watchObservedRunningTime="2026-02-17 14:26:21.821501737 +0000 UTC m=+1208.164430006" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.827043 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19","Type":"ContainerStarted","Data":"a0c6837423c83012243ded0c8254010821ac471a614b60ef2aa6c50c514ceee8"} Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.832964 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.862627 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=18.577280219 podStartE2EDuration="25.862601095s" podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.951404343 +0000 UTC m=+1196.294332612" lastFinishedPulling="2026-02-17 14:26:17.236725219 +0000 UTC m=+1203.579653488" observedRunningTime="2026-02-17 14:26:21.841876146 +0000 UTC m=+1208.184804425" watchObservedRunningTime="2026-02-17 14:26:21.862601095 +0000 UTC m=+1208.205529374" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.866446 4836 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=17.75089924 podStartE2EDuration="25.866421617s" podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.693038109 +0000 UTC m=+1196.035966378" lastFinishedPulling="2026-02-17 14:26:17.808560486 +0000 UTC m=+1204.151488755" observedRunningTime="2026-02-17 14:26:21.859649197 +0000 UTC m=+1208.202577486" watchObservedRunningTime="2026-02-17 14:26:21.866421617 +0000 UTC m=+1208.209349886" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.911064 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nbvnf" podStartSLOduration=18.662474646 podStartE2EDuration="25.911035219s" podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.953066807 +0000 UTC m=+1196.295995076" lastFinishedPulling="2026-02-17 14:26:17.20162736 +0000 UTC m=+1203.544555649" observedRunningTime="2026-02-17 14:26:21.891491971 +0000 UTC m=+1208.234420250" watchObservedRunningTime="2026-02-17 14:26:21.911035219 +0000 UTC m=+1208.253963488" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.940323 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" podStartSLOduration=18.674279699 podStartE2EDuration="25.940279993s" podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.971998539 +0000 UTC m=+1196.314926808" lastFinishedPulling="2026-02-17 14:26:17.237998833 +0000 UTC m=+1203.580927102" observedRunningTime="2026-02-17 14:26:21.916264997 +0000 UTC m=+1208.259193286" watchObservedRunningTime="2026-02-17 14:26:21.940279993 +0000 UTC m=+1208.283208262" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.959760 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ghk5k" podStartSLOduration=27.700684741 podStartE2EDuration="35.959721638s" podCreationTimestamp="2026-02-17 14:25:46 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.072972116 +0000 UTC m=+1195.415900385" lastFinishedPulling="2026-02-17 14:26:17.332009013 +0000 UTC m=+1203.674937282" observedRunningTime="2026-02-17 14:26:21.948076429 +0000 UTC m=+1208.291004718" watchObservedRunningTime="2026-02-17 14:26:21.959721638 +0000 UTC m=+1208.302649907" Feb 17 14:26:21 crc kubenswrapper[4836]: I0217 14:26:21.985834 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.584460113 podStartE2EDuration="41.985799949s" podCreationTimestamp="2026-02-17 14:25:40 +0000 UTC" firstStartedPulling="2026-02-17 14:25:51.666832788 +0000 UTC m=+1178.009761077" lastFinishedPulling="2026-02-17 14:26:21.068172644 +0000 UTC m=+1207.411100913" observedRunningTime="2026-02-17 14:26:21.969307772 +0000 UTC m=+1208.312236041" watchObservedRunningTime="2026-02-17 14:26:21.985799949 +0000 UTC m=+1208.328728218" Feb 17 14:26:22 crc kubenswrapper[4836]: I0217 14:26:22.007599 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=28.177142979 podStartE2EDuration="39.007569046s" podCreationTimestamp="2026-02-17 14:25:43 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.095107022 +0000 UTC m=+1195.438035291" lastFinishedPulling="2026-02-17 14:26:19.925533079 +0000 UTC m=+1206.268461358" observedRunningTime="2026-02-17 14:26:21.993666657 +0000 UTC m=+1208.336594926" watchObservedRunningTime="2026-02-17 14:26:22.007569046 +0000 UTC m=+1208.350497315" Feb 17 14:26:22 crc kubenswrapper[4836]: I0217 14:26:22.095712 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-q78z5" podStartSLOduration=18.493372907 podStartE2EDuration="26.09567535s" 
podCreationTimestamp="2026-02-17 14:25:56 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.733420479 +0000 UTC m=+1196.076348748" lastFinishedPulling="2026-02-17 14:26:17.335722922 +0000 UTC m=+1203.678651191" observedRunningTime="2026-02-17 14:26:22.093223364 +0000 UTC m=+1208.436151653" watchObservedRunningTime="2026-02-17 14:26:22.09567535 +0000 UTC m=+1208.438603619" Feb 17 14:26:22 crc kubenswrapper[4836]: I0217 14:26:22.838522 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6016745-1634-4eb6-afee-b98ce9ab8f56","Type":"ContainerStarted","Data":"aee74edc0c06a08e555878906493cce427efbca90aaeb3c4fe3a23355ef32693"} Feb 17 14:26:23 crc kubenswrapper[4836]: I0217 14:26:23.870226 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"039a526c-4f5a-4641-9340-b18459145569","Type":"ContainerStarted","Data":"9fc719884946b23c18eb39d431c1a3a86925f7b12eb5058327ff5297c2544b72"} Feb 17 14:26:23 crc kubenswrapper[4836]: I0217 14:26:23.879852 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" event={"ID":"63d320ce-8669-4285-b4bc-dbb6eeb9a190","Type":"ContainerStarted","Data":"d1ced8732b18e9a32bcf99bb2f034caca8afbe19ad1c6c3a49849748da69630c"} Feb 17 14:26:23 crc kubenswrapper[4836]: I0217 14:26:23.880232 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:26:23 crc kubenswrapper[4836]: I0217 14:26:23.901735 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"348d02a8-d1b2-4bd3-9f4c-9153e24a5f19","Type":"ContainerStarted","Data":"1ec68336a3c5494d166918ccd0c9bb1885725856abc7c73cfa1b9a88ce8c4dbe"} Feb 17 14:26:23 crc kubenswrapper[4836]: I0217 14:26:23.905061 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j4jj9" 
event={"ID":"cefe420d-f25c-4681-9ae8-b61f0a354282","Type":"ContainerStarted","Data":"d3bd473c9d3b050f0ef16304bf8861b295846719e86a7ff11a5cbe0b0bfbab0b"} Feb 17 14:26:23 crc kubenswrapper[4836]: I0217 14:26:23.907111 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55bc1962-7790-448a-838c-cb13a870ea23","Type":"ContainerStarted","Data":"3a2f0903fa9451947c8daa39f2fd1b4f6ad75329d2c8ec14431d2d465b026a83"} Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.016669 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.155248515 podStartE2EDuration="38.016644781s" podCreationTimestamp="2026-02-17 14:25:46 +0000 UTC" firstStartedPulling="2026-02-17 14:26:08.459342422 +0000 UTC m=+1194.802270691" lastFinishedPulling="2026-02-17 14:26:23.320738688 +0000 UTC m=+1209.663666957" observedRunningTime="2026-02-17 14:26:24.000215025 +0000 UTC m=+1210.343143324" watchObservedRunningTime="2026-02-17 14:26:24.016644781 +0000 UTC m=+1210.359573060" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.033375 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" podStartSLOduration=5.119360009 podStartE2EDuration="48.033339852s" podCreationTimestamp="2026-02-17 14:25:36 +0000 UTC" firstStartedPulling="2026-02-17 14:25:38.158182537 +0000 UTC m=+1164.501110796" lastFinishedPulling="2026-02-17 14:26:21.07216236 +0000 UTC m=+1207.415090639" observedRunningTime="2026-02-17 14:26:24.023946484 +0000 UTC m=+1210.366874753" watchObservedRunningTime="2026-02-17 14:26:24.033339852 +0000 UTC m=+1210.376268121" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.043698 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.067218 4836 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.269132058 podStartE2EDuration="35.067187469s" podCreationTimestamp="2026-02-17 14:25:49 +0000 UTC" firstStartedPulling="2026-02-17 14:26:06.510669568 +0000 UTC m=+1192.853597837" lastFinishedPulling="2026-02-17 14:26:23.308724979 +0000 UTC m=+1209.651653248" observedRunningTime="2026-02-17 14:26:24.063641085 +0000 UTC m=+1210.406569364" watchObservedRunningTime="2026-02-17 14:26:24.067187469 +0000 UTC m=+1210.410115738" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.096461 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.483652 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.524485 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.918684 4836 generic.go:334] "Generic (PLEG): container finished" podID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerID="7bca88336a02b6b00bc19416ffdd2164736c7a5342d72427305b5d2ff3839adf" exitCode=0 Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.918737 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" event={"ID":"e14b6d2f-85ef-4f0c-8a81-426aee02b456","Type":"ContainerDied","Data":"7bca88336a02b6b00bc19416ffdd2164736c7a5342d72427305b5d2ff3839adf"} Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.922276 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j4jj9" event={"ID":"cefe420d-f25c-4681-9ae8-b61f0a354282","Type":"ContainerStarted","Data":"b3cdaabc9e929f938f58423846d3d6283236f41b4567b11e16979ebd00a0d473"} Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.922994 4836 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.923021 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 14:26:24 crc kubenswrapper[4836]: I0217 14:26:24.964540 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-j4jj9" podStartSLOduration=31.295861317 podStartE2EDuration="38.964516747s" podCreationTimestamp="2026-02-17 14:25:46 +0000 UTC" firstStartedPulling="2026-02-17 14:26:09.476558746 +0000 UTC m=+1195.819487015" lastFinishedPulling="2026-02-17 14:26:17.145214176 +0000 UTC m=+1203.488142445" observedRunningTime="2026-02-17 14:26:24.961113026 +0000 UTC m=+1211.304041295" watchObservedRunningTime="2026-02-17 14:26:24.964516747 +0000 UTC m=+1211.307445006" Feb 17 14:26:25 crc kubenswrapper[4836]: I0217 14:26:25.934387 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" event={"ID":"e14b6d2f-85ef-4f0c-8a81-426aee02b456","Type":"ContainerStarted","Data":"3ea7444a593f14512dc997fa7b4dd1c0aa61dd99e0888b276623626f9d806659"} Feb 17 14:26:25 crc kubenswrapper[4836]: I0217 14:26:25.934918 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:26:25 crc kubenswrapper[4836]: I0217 14:26:25.935142 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:26:25 crc kubenswrapper[4836]: I0217 14:26:25.935573 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:26:25 crc kubenswrapper[4836]: I0217 14:26:25.957048 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" podStartSLOduration=-9223371986.897753 podStartE2EDuration="49.957022325s" podCreationTimestamp="2026-02-17 14:25:36 +0000 UTC" 
firstStartedPulling="2026-02-17 14:25:37.34631574 +0000 UTC m=+1163.689244009" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:25.956248765 +0000 UTC m=+1212.299177054" watchObservedRunningTime="2026-02-17 14:26:25.957022325 +0000 UTC m=+1212.299950594" Feb 17 14:26:25 crc kubenswrapper[4836]: I0217 14:26:25.976713 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.265795 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s6vqb"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.266105 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerName="dnsmasq-dns" containerID="cri-o://d1ced8732b18e9a32bcf99bb2f034caca8afbe19ad1c6c3a49849748da69630c" gracePeriod=10 Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.315710 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.320574 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hp877"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.328096 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.330976 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.358109 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hp877"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.450606 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6s7lx"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.453124 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.457277 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.466047 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6s7lx"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.468669 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.468841 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.468881 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zd4\" (UniqueName: \"kubernetes.io/projected/ae1de151-2799-49ba-839c-70e035c6f1d5-kube-api-access-98zd4\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.468931 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-config\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570012 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570067 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdsn4\" (UniqueName: \"kubernetes.io/projected/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-kube-api-access-kdsn4\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570131 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 
14:26:26.570151 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-ovs-rundir\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570183 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-ovn-rundir\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570206 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98zd4\" (UniqueName: \"kubernetes.io/projected/ae1de151-2799-49ba-839c-70e035c6f1d5-kube-api-access-98zd4\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570236 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-config\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570253 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-config\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570319 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-combined-ca-bundle\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.570342 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.571139 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.571514 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.572338 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-config\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.617505 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-98zd4\" (UniqueName: \"kubernetes.io/projected/ae1de151-2799-49ba-839c-70e035c6f1d5-kube-api-access-98zd4\") pod \"dnsmasq-dns-7fd796d7df-hp877\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.651198 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.672524 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-ovn-rundir\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.672636 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-config\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.672763 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-combined-ca-bundle\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.672902 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 
17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.672993 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdsn4\" (UniqueName: \"kubernetes.io/projected/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-kube-api-access-kdsn4\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.673107 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-ovs-rundir\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.673480 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-ovs-rundir\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.673841 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-ovn-rundir\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.675184 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-config\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.680907 4836 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-combined-ca-bundle\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.684085 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.708108 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdsn4\" (UniqueName: \"kubernetes.io/projected/bf32834e-7ae4-4e3b-b532-dd87f6a9223e-kube-api-access-kdsn4\") pod \"ovn-controller-metrics-6s7lx\" (UID: \"bf32834e-7ae4-4e3b-b532-dd87f6a9223e\") " pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.810113 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6s7lx" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.851272 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jbcz5"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.896898 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bpss8"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.899589 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.904041 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.915317 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bpss8"] Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.947157 4836 generic.go:334] "Generic (PLEG): container finished" podID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerID="d1ced8732b18e9a32bcf99bb2f034caca8afbe19ad1c6c3a49849748da69630c" exitCode=0 Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.947235 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" event={"ID":"63d320ce-8669-4285-b4bc-dbb6eeb9a190","Type":"ContainerDied","Data":"d1ced8732b18e9a32bcf99bb2f034caca8afbe19ad1c6c3a49849748da69630c"} Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.986946 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.987231 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.987366 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.987517 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4bn\" (UniqueName: \"kubernetes.io/projected/7e0a6937-945b-48fc-a328-6715e10ffddc-kube-api-access-tx4bn\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:26 crc kubenswrapper[4836]: I0217 14:26:26.988043 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-config\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.089988 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4bn\" (UniqueName: \"kubernetes.io/projected/7e0a6937-945b-48fc-a328-6715e10ffddc-kube-api-access-tx4bn\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.090161 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-config\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.090281 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.090415 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.090443 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.092394 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.092897 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.093560 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-config\") pod 
\"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.093598 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.111554 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4bn\" (UniqueName: \"kubernetes.io/projected/7e0a6937-945b-48fc-a328-6715e10ffddc-kube-api-access-tx4bn\") pod \"dnsmasq-dns-86db49b7ff-bpss8\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.173269 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6s7lx"] Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.227947 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hp877"] Feb 17 14:26:27 crc kubenswrapper[4836]: W0217 14:26:27.230710 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae1de151_2799_49ba_839c_70e035c6f1d5.slice/crio-124b726413d3b60c95f594f724c67ccaf14d05521a568a0d9ac52ffab7dc6d70 WatchSource:0}: Error finding container 124b726413d3b60c95f594f724c67ccaf14d05521a568a0d9ac52ffab7dc6d70: Status 404 returned error can't find the container with id 124b726413d3b60c95f594f724c67ccaf14d05521a568a0d9ac52ffab7dc6d70 Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.237950 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:27 crc kubenswrapper[4836]: I0217 14:26:27.795264 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bpss8"] Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:27.999810 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" event={"ID":"7e0a6937-945b-48fc-a328-6715e10ffddc","Type":"ContainerStarted","Data":"ff5ecfc3d719da4b799fbc70b95c4645eaf91702ed47ae7bfdec7b990d4e151b"} Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.036189 4836 generic.go:334] "Generic (PLEG): container finished" podID="a6016745-1634-4eb6-afee-b98ce9ab8f56" containerID="aee74edc0c06a08e555878906493cce427efbca90aaeb3c4fe3a23355ef32693" exitCode=0 Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.036329 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6016745-1634-4eb6-afee-b98ce9ab8f56","Type":"ContainerDied","Data":"aee74edc0c06a08e555878906493cce427efbca90aaeb3c4fe3a23355ef32693"} Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.065584 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6s7lx" event={"ID":"bf32834e-7ae4-4e3b-b532-dd87f6a9223e","Type":"ContainerStarted","Data":"0f274f14a3714e4c0d22bf75af7b6f378d54a8f80ed9352c143e7eeb68adab24"} Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.090895 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" event={"ID":"ae1de151-2799-49ba-839c-70e035c6f1d5","Type":"ContainerStarted","Data":"124b726413d3b60c95f594f724c67ccaf14d05521a568a0d9ac52ffab7dc6d70"} Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.117753 4836 generic.go:334] "Generic (PLEG): container finished" podID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerID="1aeb38549c5093ddcbd19fe025e8df306afcc08ba355a33bcd16537686f0d989" exitCode=0 
Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.118244 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerDied","Data":"1aeb38549c5093ddcbd19fe025e8df306afcc08ba355a33bcd16537686f0d989"} Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.118740 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerName="dnsmasq-dns" containerID="cri-o://3ea7444a593f14512dc997fa7b4dd1c0aa61dd99e0888b276623626f9d806659" gracePeriod=10 Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.523335 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.701206 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.706308 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.713578 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ksw98" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.713888 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.714034 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.714199 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.740930 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.758863 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f031114-b776-4180-ab6e-eb5868f34d3e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.758942 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h66n\" (UniqueName: \"kubernetes.io/projected/0f031114-b776-4180-ab6e-eb5868f34d3e-kube-api-access-6h66n\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.758964 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " 
pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.759023 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f031114-b776-4180-ab6e-eb5868f34d3e-config\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.759061 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.759110 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f031114-b776-4180-ab6e-eb5868f34d3e-scripts\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.759128 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.860519 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f031114-b776-4180-ab6e-eb5868f34d3e-config\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.860612 4836 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.860664 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f031114-b776-4180-ab6e-eb5868f34d3e-scripts\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.860685 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.860717 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f031114-b776-4180-ab6e-eb5868f34d3e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.860769 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h66n\" (UniqueName: \"kubernetes.io/projected/0f031114-b776-4180-ab6e-eb5868f34d3e-kube-api-access-6h66n\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.860791 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") 
" pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.867550 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f031114-b776-4180-ab6e-eb5868f34d3e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.871699 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.872775 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.872930 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f031114-b776-4180-ab6e-eb5868f34d3e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.894769 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h66n\" (UniqueName: \"kubernetes.io/projected/0f031114-b776-4180-ab6e-eb5868f34d3e-kube-api-access-6h66n\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.898507 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0f031114-b776-4180-ab6e-eb5868f34d3e-config\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:28 crc kubenswrapper[4836]: I0217 14:26:28.898527 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f031114-b776-4180-ab6e-eb5868f34d3e-scripts\") pod \"ovn-northd-0\" (UID: \"0f031114-b776-4180-ab6e-eb5868f34d3e\") " pod="openstack/ovn-northd-0" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.038438 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.137984 4836 generic.go:334] "Generic (PLEG): container finished" podID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerID="3ea7444a593f14512dc997fa7b4dd1c0aa61dd99e0888b276623626f9d806659" exitCode=0 Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.138062 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" event={"ID":"e14b6d2f-85ef-4f0c-8a81-426aee02b456","Type":"ContainerDied","Data":"3ea7444a593f14512dc997fa7b4dd1c0aa61dd99e0888b276623626f9d806659"} Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.144458 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" event={"ID":"63d320ce-8669-4285-b4bc-dbb6eeb9a190","Type":"ContainerDied","Data":"0ccc818ba3aecccefe49bbab270ac8d64079fabeee863e8305c62599ebffa6de"} Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.144553 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ccc818ba3aecccefe49bbab270ac8d64079fabeee863e8305c62599ebffa6de" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.197940 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.374144 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-config\") pod \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.374385 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-dns-svc\") pod \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.374468 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ntrv\" (UniqueName: \"kubernetes.io/projected/63d320ce-8669-4285-b4bc-dbb6eeb9a190-kube-api-access-8ntrv\") pod \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\" (UID: \"63d320ce-8669-4285-b4bc-dbb6eeb9a190\") " Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.382937 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d320ce-8669-4285-b4bc-dbb6eeb9a190-kube-api-access-8ntrv" (OuterVolumeSpecName: "kube-api-access-8ntrv") pod "63d320ce-8669-4285-b4bc-dbb6eeb9a190" (UID: "63d320ce-8669-4285-b4bc-dbb6eeb9a190"). InnerVolumeSpecName "kube-api-access-8ntrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.436281 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63d320ce-8669-4285-b4bc-dbb6eeb9a190" (UID: "63d320ce-8669-4285-b4bc-dbb6eeb9a190"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.444965 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-config" (OuterVolumeSpecName: "config") pod "63d320ce-8669-4285-b4bc-dbb6eeb9a190" (UID: "63d320ce-8669-4285-b4bc-dbb6eeb9a190"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.479498 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ntrv\" (UniqueName: \"kubernetes.io/projected/63d320ce-8669-4285-b4bc-dbb6eeb9a190-kube-api-access-8ntrv\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.479549 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.479560 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63d320ce-8669-4285-b4bc-dbb6eeb9a190-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.623574 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.623633 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.698858 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.765408 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.765505 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.777071 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.800525 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.894707 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-dns-svc\") pod \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.894767 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zckh\" (UniqueName: \"kubernetes.io/projected/e14b6d2f-85ef-4f0c-8a81-426aee02b456-kube-api-access-4zckh\") pod \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.894803 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-config\") pod \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\" (UID: \"e14b6d2f-85ef-4f0c-8a81-426aee02b456\") " Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.903519 4836 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14b6d2f-85ef-4f0c-8a81-426aee02b456-kube-api-access-4zckh" (OuterVolumeSpecName: "kube-api-access-4zckh") pod "e14b6d2f-85ef-4f0c-8a81-426aee02b456" (UID: "e14b6d2f-85ef-4f0c-8a81-426aee02b456"). InnerVolumeSpecName "kube-api-access-4zckh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.938790 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e14b6d2f-85ef-4f0c-8a81-426aee02b456" (UID: "e14b6d2f-85ef-4f0c-8a81-426aee02b456"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.944516 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-config" (OuterVolumeSpecName: "config") pod "e14b6d2f-85ef-4f0c-8a81-426aee02b456" (UID: "e14b6d2f-85ef-4f0c-8a81-426aee02b456"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.996826 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.996867 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zckh\" (UniqueName: \"kubernetes.io/projected/e14b6d2f-85ef-4f0c-8a81-426aee02b456-kube-api-access-4zckh\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:29 crc kubenswrapper[4836]: I0217 14:26:29.996883 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e14b6d2f-85ef-4f0c-8a81-426aee02b456-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.160227 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6016745-1634-4eb6-afee-b98ce9ab8f56","Type":"ContainerStarted","Data":"d4d85ba381cf0b62a0a0175503f952a077c1eb634e3436f3651878883d6540f2"} Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.163353 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6s7lx" event={"ID":"bf32834e-7ae4-4e3b-b532-dd87f6a9223e","Type":"ContainerStarted","Data":"476655d8bcfbe2366d26089d77df2c41aa8693d47f6e231f7ca2793e699a4216"} Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.164843 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0f031114-b776-4180-ab6e-eb5868f34d3e","Type":"ContainerStarted","Data":"0770d15fb85402ea5503965c244932ef8b8f57b07f17801eaf1cf0d20cc68dca"} Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.166396 4836 generic.go:334] "Generic (PLEG): container finished" podID="ae1de151-2799-49ba-839c-70e035c6f1d5" 
containerID="6238b015658e3e1a044d71695f4d830d8dd3bda46833739bdcd6ad73a556976d" exitCode=0 Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.166472 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" event={"ID":"ae1de151-2799-49ba-839c-70e035c6f1d5","Type":"ContainerDied","Data":"6238b015658e3e1a044d71695f4d830d8dd3bda46833739bdcd6ad73a556976d"} Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.170087 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" event={"ID":"e14b6d2f-85ef-4f0c-8a81-426aee02b456","Type":"ContainerDied","Data":"4be2faa5279826c8447da22307f09f3ad1d1675b115d7c7c5cab72070952c1fe"} Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.170144 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jbcz5" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.170152 4836 scope.go:117] "RemoveContainer" containerID="3ea7444a593f14512dc997fa7b4dd1c0aa61dd99e0888b276623626f9d806659" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.173174 4836 generic.go:334] "Generic (PLEG): container finished" podID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerID="5d73acc7d3b7d21dfd57bd1f5f6891bf754918c51d20232beb9b0071a1de3710" exitCode=0 Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.173267 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s6vqb" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.173395 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" event={"ID":"7e0a6937-945b-48fc-a328-6715e10ffddc","Type":"ContainerDied","Data":"5d73acc7d3b7d21dfd57bd1f5f6891bf754918c51d20232beb9b0071a1de3710"} Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.202067 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371985.652733 podStartE2EDuration="51.202042435s" podCreationTimestamp="2026-02-17 14:25:39 +0000 UTC" firstStartedPulling="2026-02-17 14:25:50.943527059 +0000 UTC m=+1177.286455328" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:30.191335961 +0000 UTC m=+1216.534264240" watchObservedRunningTime="2026-02-17 14:26:30.202042435 +0000 UTC m=+1216.544970704" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.211037 4836 scope.go:117] "RemoveContainer" containerID="7bca88336a02b6b00bc19416ffdd2164736c7a5342d72427305b5d2ff3839adf" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.304324 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6s7lx" podStartSLOduration=4.304280602 podStartE2EDuration="4.304280602s" podCreationTimestamp="2026-02-17 14:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:30.285507986 +0000 UTC m=+1216.628436255" watchObservedRunningTime="2026-02-17 14:26:30.304280602 +0000 UTC m=+1216.647208881" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.334419 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jbcz5"] Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.352152 4836 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jbcz5"] Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.372501 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s6vqb"] Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.372704 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.391897 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s6vqb"] Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.587814 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" path="/var/lib/kubelet/pods/63d320ce-8669-4285-b4bc-dbb6eeb9a190/volumes" Feb 17 14:26:30 crc kubenswrapper[4836]: I0217 14:26:30.588935 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" path="/var/lib/kubelet/pods/e14b6d2f-85ef-4f0c-8a81-426aee02b456/volumes" Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.138439 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.138516 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.198169 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" event={"ID":"7e0a6937-945b-48fc-a328-6715e10ffddc","Type":"ContainerStarted","Data":"a4dd5c55405656df129bfcb6d3d7edde886d28467877f4823411944db38277ef"} Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.199421 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.205470 4836 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" event={"ID":"ae1de151-2799-49ba-839c-70e035c6f1d5","Type":"ContainerStarted","Data":"9e306714806082c8f26efe22ad79196500631adf5249bfbc7b5f3c70a80f192f"} Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.206853 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.209010 4836 generic.go:334] "Generic (PLEG): container finished" podID="039a526c-4f5a-4641-9340-b18459145569" containerID="9fc719884946b23c18eb39d431c1a3a86925f7b12eb5058327ff5297c2544b72" exitCode=0 Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.209087 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"039a526c-4f5a-4641-9340-b18459145569","Type":"ContainerDied","Data":"9fc719884946b23c18eb39d431c1a3a86925f7b12eb5058327ff5297c2544b72"} Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.222252 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" podStartSLOduration=5.222229577 podStartE2EDuration="5.222229577s" podCreationTimestamp="2026-02-17 14:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:31.220080559 +0000 UTC m=+1217.563008828" watchObservedRunningTime="2026-02-17 14:26:31.222229577 +0000 UTC m=+1217.565157846" Feb 17 14:26:31 crc kubenswrapper[4836]: I0217 14:26:31.288651 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" podStartSLOduration=5.288626045 podStartE2EDuration="5.288626045s" podCreationTimestamp="2026-02-17 14:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:31.2748636 
+0000 UTC m=+1217.617791889" watchObservedRunningTime="2026-02-17 14:26:31.288626045 +0000 UTC m=+1217.631554314" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.227606 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0f031114-b776-4180-ab6e-eb5868f34d3e","Type":"ContainerStarted","Data":"5306b2ee0e0512bed9154941beda2bb67a18a17518b1967232d0c4bb2e53b785"} Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.227971 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0f031114-b776-4180-ab6e-eb5868f34d3e","Type":"ContainerStarted","Data":"1655c3da47c805f9ddc21bc36e579de15f15d66e96becd5ec0544bc750bfe3ed"} Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.228090 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.260077 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.958174922 podStartE2EDuration="4.260057966s" podCreationTimestamp="2026-02-17 14:26:28 +0000 UTC" firstStartedPulling="2026-02-17 14:26:29.703217592 +0000 UTC m=+1216.046145861" lastFinishedPulling="2026-02-17 14:26:31.005100636 +0000 UTC m=+1217.348028905" observedRunningTime="2026-02-17 14:26:32.252127715 +0000 UTC m=+1218.595055984" watchObservedRunningTime="2026-02-17 14:26:32.260057966 +0000 UTC m=+1218.602986235" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.385915 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d8f3-account-create-update-kmlvm"] Feb 17 14:26:32 crc kubenswrapper[4836]: E0217 14:26:32.386282 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerName="dnsmasq-dns" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.386309 4836 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerName="dnsmasq-dns" Feb 17 14:26:32 crc kubenswrapper[4836]: E0217 14:26:32.386331 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerName="init" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.386338 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerName="init" Feb 17 14:26:32 crc kubenswrapper[4836]: E0217 14:26:32.386352 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerName="init" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.386358 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerName="init" Feb 17 14:26:32 crc kubenswrapper[4836]: E0217 14:26:32.386375 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerName="dnsmasq-dns" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.386380 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerName="dnsmasq-dns" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.386560 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14b6d2f-85ef-4f0c-8a81-426aee02b456" containerName="dnsmasq-dns" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.386582 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d320ce-8669-4285-b4bc-dbb6eeb9a190" containerName="dnsmasq-dns" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.387506 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.389792 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.425241 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8f3-account-create-update-kmlvm"] Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.456053 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-k7zc9"] Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.458379 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.484619 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1659d-7892-4744-a570-4ba7c65e4caf-operator-scripts\") pod \"keystone-d8f3-account-create-update-kmlvm\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.484802 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ljc\" (UniqueName: \"kubernetes.io/projected/2ae1659d-7892-4744-a570-4ba7c65e4caf-kube-api-access-g9ljc\") pod \"keystone-d8f3-account-create-update-kmlvm\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.485602 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k7zc9"] Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.587025 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stl2m\" 
(UniqueName: \"kubernetes.io/projected/e562d506-21d2-4edd-90b8-97bd11bf068e-kube-api-access-stl2m\") pod \"keystone-db-create-k7zc9\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.587109 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1659d-7892-4744-a570-4ba7c65e4caf-operator-scripts\") pod \"keystone-d8f3-account-create-update-kmlvm\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.587162 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e562d506-21d2-4edd-90b8-97bd11bf068e-operator-scripts\") pod \"keystone-db-create-k7zc9\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.587225 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ljc\" (UniqueName: \"kubernetes.io/projected/2ae1659d-7892-4744-a570-4ba7c65e4caf-kube-api-access-g9ljc\") pod \"keystone-d8f3-account-create-update-kmlvm\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.588348 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1659d-7892-4744-a570-4ba7c65e4caf-operator-scripts\") pod \"keystone-d8f3-account-create-update-kmlvm\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.619578 4836 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g9ljc\" (UniqueName: \"kubernetes.io/projected/2ae1659d-7892-4744-a570-4ba7c65e4caf-kube-api-access-g9ljc\") pod \"keystone-d8f3-account-create-update-kmlvm\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.648360 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hx7tv"] Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.651007 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.665195 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hx7tv"] Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.696654 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e562d506-21d2-4edd-90b8-97bd11bf068e-operator-scripts\") pod \"keystone-db-create-k7zc9\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.697280 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stl2m\" (UniqueName: \"kubernetes.io/projected/e562d506-21d2-4edd-90b8-97bd11bf068e-kube-api-access-stl2m\") pod \"keystone-db-create-k7zc9\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.697978 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e562d506-21d2-4edd-90b8-97bd11bf068e-operator-scripts\") pod \"keystone-db-create-k7zc9\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 
crc kubenswrapper[4836]: I0217 14:26:32.722603 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.722826 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stl2m\" (UniqueName: \"kubernetes.io/projected/e562d506-21d2-4edd-90b8-97bd11bf068e-kube-api-access-stl2m\") pod \"keystone-db-create-k7zc9\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.778387 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-83de-account-create-update-fh75b"] Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.780244 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.784626 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.786495 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.808619 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-operator-scripts\") pod \"placement-db-create-hx7tv\" (UID: \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.808900 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfzfp\" (UniqueName: \"kubernetes.io/projected/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-kube-api-access-sfzfp\") pod \"placement-db-create-hx7tv\" (UID: \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.824101 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-83de-account-create-update-fh75b"] Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.914437 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54905e17-d443-4465-8f70-7be04a89086f-operator-scripts\") pod \"placement-83de-account-create-update-fh75b\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.915008 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-operator-scripts\") pod \"placement-db-create-hx7tv\" (UID: \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.915113 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8nc5\" (UniqueName: \"kubernetes.io/projected/54905e17-d443-4465-8f70-7be04a89086f-kube-api-access-r8nc5\") pod \"placement-83de-account-create-update-fh75b\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.915176 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfzfp\" (UniqueName: \"kubernetes.io/projected/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-kube-api-access-sfzfp\") pod \"placement-db-create-hx7tv\" (UID: \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.916904 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-operator-scripts\") pod \"placement-db-create-hx7tv\" (UID: \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:32 crc kubenswrapper[4836]: I0217 14:26:32.940993 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfzfp\" (UniqueName: \"kubernetes.io/projected/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-kube-api-access-sfzfp\") pod \"placement-db-create-hx7tv\" (UID: \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.007820 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.020347 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54905e17-d443-4465-8f70-7be04a89086f-operator-scripts\") pod \"placement-83de-account-create-update-fh75b\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.021098 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8nc5\" (UniqueName: \"kubernetes.io/projected/54905e17-d443-4465-8f70-7be04a89086f-kube-api-access-r8nc5\") pod \"placement-83de-account-create-update-fh75b\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.021113 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54905e17-d443-4465-8f70-7be04a89086f-operator-scripts\") pod \"placement-83de-account-create-update-fh75b\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.046260 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8nc5\" (UniqueName: \"kubernetes.io/projected/54905e17-d443-4465-8f70-7be04a89086f-kube-api-access-r8nc5\") pod \"placement-83de-account-create-update-fh75b\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.229887 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.324831 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8f3-account-create-update-kmlvm"] Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.470776 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k7zc9"] Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.574362 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hx7tv"] Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.881451 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 14:26:33 crc kubenswrapper[4836]: I0217 14:26:33.894750 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-83de-account-create-update-fh75b"] Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.266509 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8f3-account-create-update-kmlvm" event={"ID":"2ae1659d-7892-4744-a570-4ba7c65e4caf","Type":"ContainerStarted","Data":"053f72cbdf2c7e5f30db435af69e5bbe1df08b8271492028d870e534720e3fc6"} Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.344405 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hp877"] Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.350869 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerName="dnsmasq-dns" containerID="cri-o://9e306714806082c8f26efe22ad79196500631adf5249bfbc7b5f3c70a80f192f" gracePeriod=10 Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.389065 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-wbh2w"] Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 
14:26:34.391735 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.403998 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wbh2w"] Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.471873 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-dns-svc\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.471968 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-config\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.472218 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.472265 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76vb5\" (UniqueName: \"kubernetes.io/projected/312259c2-4f8f-401d-a19e-64d0bc7dd35f-kube-api-access-76vb5\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.472319 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.585636 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-dns-svc\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.586178 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-config\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.586420 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.586481 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76vb5\" (UniqueName: \"kubernetes.io/projected/312259c2-4f8f-401d-a19e-64d0bc7dd35f-kube-api-access-76vb5\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.586522 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.587286 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-dns-svc\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.587752 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.589058 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.589287 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-config\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.618084 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76vb5\" (UniqueName: 
\"kubernetes.io/projected/312259c2-4f8f-401d-a19e-64d0bc7dd35f-kube-api-access-76vb5\") pod \"dnsmasq-dns-698758b865-wbh2w\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:34 crc kubenswrapper[4836]: I0217 14:26:34.745490 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.294597 4836 generic.go:334] "Generic (PLEG): container finished" podID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerID="9e306714806082c8f26efe22ad79196500631adf5249bfbc7b5f3c70a80f192f" exitCode=0 Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.294673 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" event={"ID":"ae1de151-2799-49ba-839c-70e035c6f1d5","Type":"ContainerDied","Data":"9e306714806082c8f26efe22ad79196500631adf5249bfbc7b5f3c70a80f192f"} Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.514358 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.521684 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.525361 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.525365 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.525515 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.527729 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-g6scn" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.549555 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.616450 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqdzq\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-kube-api-access-pqdzq\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.616618 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.616675 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e482046c-502a-4f41-b013-7b3ef1c71ee1-cache\") pod \"swift-storage-0\" (UID: 
\"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.616750 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e482046c-502a-4f41-b013-7b3ef1c71ee1-lock\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.616815 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.616874 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e482046c-502a-4f41-b013-7b3ef1c71ee1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.719917 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.720014 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e482046c-502a-4f41-b013-7b3ef1c71ee1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.720130 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqdzq\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-kube-api-access-pqdzq\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.720216 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.720269 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e482046c-502a-4f41-b013-7b3ef1c71ee1-cache\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: E0217 14:26:35.720331 4836 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 14:26:35 crc kubenswrapper[4836]: E0217 14:26:35.720372 4836 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 14:26:35 crc kubenswrapper[4836]: E0217 14:26:35.720454 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift podName:e482046c-502a-4f41-b013-7b3ef1c71ee1 nodeName:}" failed. No retries permitted until 2026-02-17 14:26:36.220423801 +0000 UTC m=+1222.563352140 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift") pod "swift-storage-0" (UID: "e482046c-502a-4f41-b013-7b3ef1c71ee1") : configmap "swift-ring-files" not found Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.720350 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e482046c-502a-4f41-b013-7b3ef1c71ee1-lock\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.720954 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e482046c-502a-4f41-b013-7b3ef1c71ee1-cache\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.721520 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e482046c-502a-4f41-b013-7b3ef1c71ee1-lock\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.725072 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.725122 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3fb15f4e3277f1f113896c526bb3ebf7a54f83f6fad85785ce0d01aa07563fdc/globalmount\"" pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.734289 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e482046c-502a-4f41-b013-7b3ef1c71ee1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.746277 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqdzq\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-kube-api-access-pqdzq\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:35 crc kubenswrapper[4836]: I0217 14:26:35.775895 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d825549-7bd9-4e47-a4b1-bd74526d0dee\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.233551 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: 
\"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:36 crc kubenswrapper[4836]: E0217 14:26:36.234193 4836 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 14:26:36 crc kubenswrapper[4836]: E0217 14:26:36.234212 4836 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 14:26:36 crc kubenswrapper[4836]: E0217 14:26:36.234269 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift podName:e482046c-502a-4f41-b013-7b3ef1c71ee1 nodeName:}" failed. No retries permitted until 2026-02-17 14:26:37.234254191 +0000 UTC m=+1223.577182450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift") pod "swift-storage-0" (UID: "e482046c-502a-4f41-b013-7b3ef1c71ee1") : configmap "swift-ring-files" not found Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.560776 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dbzmx"] Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.562048 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.563812 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.564559 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.568669 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.599853 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dbzmx"] Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.641060 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb33695b-c451-44b2-8a2a-fe534a4040e3-etc-swift\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.641175 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-scripts\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.641219 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-combined-ca-bundle\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.641310 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-dispersionconf\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.641411 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-ring-data-devices\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.641454 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-swiftconf\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.641542 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxn6\" (UniqueName: \"kubernetes.io/projected/cb33695b-c451-44b2-8a2a-fe534a4040e3-kube-api-access-tlxn6\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.656611 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pn587"] Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.658290 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pn587" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.675024 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pn587"] Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.687846 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d162-account-create-update-khb5j"] Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.704020 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.708897 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.733883 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d162-account-create-update-khb5j"] Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.750868 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb33695b-c451-44b2-8a2a-fe534a4040e3-etc-swift\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751040 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-scripts\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751077 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96b57\" (UniqueName: \"kubernetes.io/projected/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-kube-api-access-96b57\") pod \"glance-db-create-pn587\" (UID: 
\"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " pod="openstack/glance-db-create-pn587" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751117 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-combined-ca-bundle\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751200 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-dispersionconf\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751256 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-operator-scripts\") pod \"glance-db-create-pn587\" (UID: \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " pod="openstack/glance-db-create-pn587" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751348 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-ring-data-devices\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751412 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-swiftconf\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " 
pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751448 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sfbb\" (UniqueName: \"kubernetes.io/projected/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-kube-api-access-8sfbb\") pod \"glance-d162-account-create-update-khb5j\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751558 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlxn6\" (UniqueName: \"kubernetes.io/projected/cb33695b-c451-44b2-8a2a-fe534a4040e3-kube-api-access-tlxn6\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.751641 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-operator-scripts\") pod \"glance-d162-account-create-update-khb5j\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.753072 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb33695b-c451-44b2-8a2a-fe534a4040e3-etc-swift\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.753807 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-ring-data-devices\") pod \"swift-ring-rebalance-dbzmx\" (UID: 
\"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.756201 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-scripts\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.761553 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-dispersionconf\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.764159 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-combined-ca-bundle\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.774325 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-swiftconf\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.780204 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlxn6\" (UniqueName: \"kubernetes.io/projected/cb33695b-c451-44b2-8a2a-fe534a4040e3-kube-api-access-tlxn6\") pod \"swift-ring-rebalance-dbzmx\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc 
kubenswrapper[4836]: I0217 14:26:36.853218 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sfbb\" (UniqueName: \"kubernetes.io/projected/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-kube-api-access-8sfbb\") pod \"glance-d162-account-create-update-khb5j\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.853589 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-operator-scripts\") pod \"glance-d162-account-create-update-khb5j\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.853806 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96b57\" (UniqueName: \"kubernetes.io/projected/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-kube-api-access-96b57\") pod \"glance-db-create-pn587\" (UID: \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " pod="openstack/glance-db-create-pn587" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.853954 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-operator-scripts\") pod \"glance-db-create-pn587\" (UID: \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " pod="openstack/glance-db-create-pn587" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.854551 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-operator-scripts\") pod \"glance-d162-account-create-update-khb5j\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " pod="openstack/glance-d162-account-create-update-khb5j" 
Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.854763 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-operator-scripts\") pod \"glance-db-create-pn587\" (UID: \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " pod="openstack/glance-db-create-pn587" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.873913 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sfbb\" (UniqueName: \"kubernetes.io/projected/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-kube-api-access-8sfbb\") pod \"glance-d162-account-create-update-khb5j\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.879535 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96b57\" (UniqueName: \"kubernetes.io/projected/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-kube-api-access-96b57\") pod \"glance-db-create-pn587\" (UID: \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " pod="openstack/glance-db-create-pn587" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.893132 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.954213 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-r4gdh" Feb 17 14:26:36 crc kubenswrapper[4836]: I0217 14:26:36.989113 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pn587" Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.050772 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.087087 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j" Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.185244 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-fsq2h" Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.241034 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.261390 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:37 crc kubenswrapper[4836]: E0217 14:26:37.261755 4836 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 14:26:37 crc kubenswrapper[4836]: E0217 14:26:37.261785 4836 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 14:26:37 crc kubenswrapper[4836]: E0217 14:26:37.261854 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift podName:e482046c-502a-4f41-b013-7b3ef1c71ee1 nodeName:}" failed. No retries permitted until 2026-02-17 14:26:39.261833318 +0000 UTC m=+1225.604761587 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift") pod "swift-storage-0" (UID: "e482046c-502a-4f41-b013-7b3ef1c71ee1") : configmap "swift-ring-files" not found Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.465909 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.588694 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 14:26:37 crc kubenswrapper[4836]: I0217 14:26:37.828338 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="1c33fb01-9bf7-43f1-86d5-004e70d3721c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.002249 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.172641 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.177121 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c8vxs"] Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.179479 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.184154 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.187941 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8vxs"] Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.284352 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86lz\" (UniqueName: \"kubernetes.io/projected/21d02a34-d68b-4cae-9f03-0b15d07fe948-kube-api-access-f86lz\") pod \"root-account-create-update-c8vxs\" (UID: \"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.284507 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d02a34-d68b-4cae-9f03-0b15d07fe948-operator-scripts\") pod \"root-account-create-update-c8vxs\" (UID: \"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.385982 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f86lz\" (UniqueName: \"kubernetes.io/projected/21d02a34-d68b-4cae-9f03-0b15d07fe948-kube-api-access-f86lz\") pod \"root-account-create-update-c8vxs\" (UID: \"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.386138 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d02a34-d68b-4cae-9f03-0b15d07fe948-operator-scripts\") pod \"root-account-create-update-c8vxs\" (UID: 
\"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.387853 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d02a34-d68b-4cae-9f03-0b15d07fe948-operator-scripts\") pod \"root-account-create-update-c8vxs\" (UID: \"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.414012 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f86lz\" (UniqueName: \"kubernetes.io/projected/21d02a34-d68b-4cae-9f03-0b15d07fe948-kube-api-access-f86lz\") pod \"root-account-create-update-c8vxs\" (UID: \"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:38 crc kubenswrapper[4836]: I0217 14:26:38.543623 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.309707 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:39 crc kubenswrapper[4836]: E0217 14:26:39.310387 4836 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 14:26:39 crc kubenswrapper[4836]: E0217 14:26:39.310411 4836 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 14:26:39 crc kubenswrapper[4836]: E0217 14:26:39.310471 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift podName:e482046c-502a-4f41-b013-7b3ef1c71ee1 nodeName:}" failed. No retries permitted until 2026-02-17 14:26:43.310452521 +0000 UTC m=+1229.653380790 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift") pod "swift-storage-0" (UID: "e482046c-502a-4f41-b013-7b3ef1c71ee1") : configmap "swift-ring-files" not found Feb 17 14:26:39 crc kubenswrapper[4836]: W0217 14:26:39.431082 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54905e17_d443_4465_8f70_7be04a89086f.slice/crio-944bbce44653f332757b41aa709ff915e239c63242463d545f1454d580e8215e WatchSource:0}: Error finding container 944bbce44653f332757b41aa709ff915e239c63242463d545f1454d580e8215e: Status 404 returned error can't find the container with id 944bbce44653f332757b41aa709ff915e239c63242463d545f1454d580e8215e Feb 17 14:26:39 crc kubenswrapper[4836]: W0217 14:26:39.434869 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode562d506_21d2_4edd_90b8_97bd11bf068e.slice/crio-7bdbd8741734a15b799a8b486b726df845fdab465fc49bc23c59d7482c73e7d5 WatchSource:0}: Error finding container 7bdbd8741734a15b799a8b486b726df845fdab465fc49bc23c59d7482c73e7d5: Status 404 returned error can't find the container with id 7bdbd8741734a15b799a8b486b726df845fdab465fc49bc23c59d7482c73e7d5 Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.671603 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.729247 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-dns-svc\") pod \"ae1de151-2799-49ba-839c-70e035c6f1d5\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.731854 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98zd4\" (UniqueName: \"kubernetes.io/projected/ae1de151-2799-49ba-839c-70e035c6f1d5-kube-api-access-98zd4\") pod \"ae1de151-2799-49ba-839c-70e035c6f1d5\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.731960 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-config\") pod \"ae1de151-2799-49ba-839c-70e035c6f1d5\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.732272 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-ovsdbserver-nb\") pod \"ae1de151-2799-49ba-839c-70e035c6f1d5\" (UID: \"ae1de151-2799-49ba-839c-70e035c6f1d5\") " Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.736640 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1de151-2799-49ba-839c-70e035c6f1d5-kube-api-access-98zd4" (OuterVolumeSpecName: "kube-api-access-98zd4") pod "ae1de151-2799-49ba-839c-70e035c6f1d5" (UID: "ae1de151-2799-49ba-839c-70e035c6f1d5"). InnerVolumeSpecName "kube-api-access-98zd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.803973 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae1de151-2799-49ba-839c-70e035c6f1d5" (UID: "ae1de151-2799-49ba-839c-70e035c6f1d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.811534 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae1de151-2799-49ba-839c-70e035c6f1d5" (UID: "ae1de151-2799-49ba-839c-70e035c6f1d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.820136 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-config" (OuterVolumeSpecName: "config") pod "ae1de151-2799-49ba-839c-70e035c6f1d5" (UID: "ae1de151-2799-49ba-839c-70e035c6f1d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.834997 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.835035 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.835046 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98zd4\" (UniqueName: \"kubernetes.io/projected/ae1de151-2799-49ba-839c-70e035c6f1d5-kube-api-access-98zd4\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:39 crc kubenswrapper[4836]: I0217 14:26:39.835056 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1de151-2799-49ba-839c-70e035c6f1d5-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.204190 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pn587"] Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.394628 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hx7tv" event={"ID":"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b","Type":"ContainerStarted","Data":"8f88022ab4daa99006c48416f95fa6fcf0ec231af3f8553f0fffe8cc8f1971ee"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.395083 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hx7tv" event={"ID":"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b","Type":"ContainerStarted","Data":"f20e9dc32e437145f802a537fa2665afc2ae79a191175483ec92ca0e4108918e"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.420003 4836 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-83de-account-create-update-fh75b" event={"ID":"54905e17-d443-4465-8f70-7be04a89086f","Type":"ContainerStarted","Data":"bf410eadcd21b6c409b08a23916bc0ac4d5ba43505387a89c251ab098b87e562"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.420056 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-83de-account-create-update-fh75b" event={"ID":"54905e17-d443-4465-8f70-7be04a89086f","Type":"ContainerStarted","Data":"944bbce44653f332757b41aa709ff915e239c63242463d545f1454d580e8215e"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.443340 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d162-account-create-update-khb5j"] Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.444795 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k7zc9" event={"ID":"e562d506-21d2-4edd-90b8-97bd11bf068e","Type":"ContainerStarted","Data":"c0e6439979838c98e66157164ef8073f70f7245c52bc8c72b4753a2777fab786"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.444909 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k7zc9" event={"ID":"e562d506-21d2-4edd-90b8-97bd11bf068e","Type":"ContainerStarted","Data":"7bdbd8741734a15b799a8b486b726df845fdab465fc49bc23c59d7482c73e7d5"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.462322 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hx7tv" podStartSLOduration=8.462284608 podStartE2EDuration="8.462284608s" podCreationTimestamp="2026-02-17 14:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:40.419772583 +0000 UTC m=+1226.762700852" watchObservedRunningTime="2026-02-17 14:26:40.462284608 +0000 UTC m=+1226.805212867" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.495984 4836 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.496401 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" event={"ID":"ae1de151-2799-49ba-839c-70e035c6f1d5","Type":"ContainerDied","Data":"124b726413d3b60c95f594f724c67ccaf14d05521a568a0d9ac52ffab7dc6d70"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.496569 4836 scope.go:117] "RemoveContainer" containerID="9e306714806082c8f26efe22ad79196500631adf5249bfbc7b5f3c70a80f192f" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.500822 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-83de-account-create-update-fh75b" podStartSLOduration=8.500793959 podStartE2EDuration="8.500793959s" podCreationTimestamp="2026-02-17 14:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:40.449756727 +0000 UTC m=+1226.792684996" watchObservedRunningTime="2026-02-17 14:26:40.500793959 +0000 UTC m=+1226.843722228" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.512737 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-k7zc9" podStartSLOduration=8.512710094 podStartE2EDuration="8.512710094s" podCreationTimestamp="2026-02-17 14:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:40.495879138 +0000 UTC m=+1226.838807407" watchObservedRunningTime="2026-02-17 14:26:40.512710094 +0000 UTC m=+1226.855638363" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.550618 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8f3-account-create-update-kmlvm" 
event={"ID":"2ae1659d-7892-4744-a570-4ba7c65e4caf","Type":"ContainerStarted","Data":"bda2c6a640050c54150d82f44c6e78a2f7107b79ee0b4f6fd03e4d8c6e1019d3"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.643889 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d8f3-account-create-update-kmlvm" podStartSLOduration=8.643867268 podStartE2EDuration="8.643867268s" podCreationTimestamp="2026-02-17 14:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:40.601695372 +0000 UTC m=+1226.944623641" watchObservedRunningTime="2026-02-17 14:26:40.643867268 +0000 UTC m=+1226.986795537" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.694921 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dbzmx"] Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.694977 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"039a526c-4f5a-4641-9340-b18459145569","Type":"ContainerStarted","Data":"03a6dc27159e9ff8f5c2d8c4e46d28e3fd0b9d571e2b143e652c2a068b3a073e"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.695004 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerStarted","Data":"5de26698cc194f27aa6fa46281e03b3fa0bc2faa6bf0ef9b745f3fec33e05835"} Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.695026 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hp877"] Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.699980 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pn587" event={"ID":"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b","Type":"ContainerStarted","Data":"337993452ec5730e0c3cc016c9ba757fc9a43025082efe848b2e9b3eeab12528"} 
Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.716824 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hp877"] Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.721763 4836 scope.go:117] "RemoveContainer" containerID="6238b015658e3e1a044d71695f4d830d8dd3bda46833739bdcd6ad73a556976d" Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.932587 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wbh2w"] Feb 17 14:26:40 crc kubenswrapper[4836]: I0217 14:26:40.952729 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8vxs"] Feb 17 14:26:40 crc kubenswrapper[4836]: W0217 14:26:40.953506 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21d02a34_d68b_4cae_9f03_0b15d07fe948.slice/crio-f11b06be301a9f9b682d6007240d8ae8a505a1f6da93c11fc5e55f0036da5ae0 WatchSource:0}: Error finding container f11b06be301a9f9b682d6007240d8ae8a505a1f6da93c11fc5e55f0036da5ae0: Status 404 returned error can't find the container with id f11b06be301a9f9b682d6007240d8ae8a505a1f6da93c11fc5e55f0036da5ae0 Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.652400 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-hp877" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.724111 4836 generic.go:334] "Generic (PLEG): container finished" podID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerID="44931b40ada4bc7bee4acb5d1054d14507951ed9df360a9eb97ae5e6b0efb503" exitCode=0 Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.724211 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wbh2w" 
event={"ID":"312259c2-4f8f-401d-a19e-64d0bc7dd35f","Type":"ContainerDied","Data":"44931b40ada4bc7bee4acb5d1054d14507951ed9df360a9eb97ae5e6b0efb503"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.724247 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wbh2w" event={"ID":"312259c2-4f8f-401d-a19e-64d0bc7dd35f","Type":"ContainerStarted","Data":"c6f4101d16fd86bcceb0625244616ff16d1c5665adecebcc6d46b7d7f983a200"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.729959 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d162-account-create-update-khb5j" event={"ID":"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5","Type":"ContainerStarted","Data":"55c6c8d1d911f68476c5d07d35dec7d57e500cdc1c29d64681255555160897dd"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.730042 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d162-account-create-update-khb5j" event={"ID":"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5","Type":"ContainerStarted","Data":"4488137d41693a0eed0cf3344bd79369971085583a9f2a449f30914ec350a79a"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.737073 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pn587" event={"ID":"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b","Type":"ContainerStarted","Data":"0179fb4c7564ecef52fa63a2f91fe687b3340cb3f7aaa46ff46f4ec68e5ee26d"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.762200 4836 generic.go:334] "Generic (PLEG): container finished" podID="add50d48-0a1c-4d2f-bcc3-ae9355e95c3b" containerID="8f88022ab4daa99006c48416f95fa6fcf0ec231af3f8553f0fffe8cc8f1971ee" exitCode=0 Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.762328 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hx7tv" event={"ID":"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b","Type":"ContainerDied","Data":"8f88022ab4daa99006c48416f95fa6fcf0ec231af3f8553f0fffe8cc8f1971ee"} Feb 17 
14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.764627 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzmx" event={"ID":"cb33695b-c451-44b2-8a2a-fe534a4040e3","Type":"ContainerStarted","Data":"0863004180b5c7074ba22f1ddb8c58005ebe6a0d2ac8583efc764697e8242881"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.766244 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8vxs" event={"ID":"21d02a34-d68b-4cae-9f03-0b15d07fe948","Type":"ContainerStarted","Data":"4c54331d8c22a82e7135a4bdfa56b01c1bacccea5967146f9a8bb1c17d9ca3da"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.766284 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8vxs" event={"ID":"21d02a34-d68b-4cae-9f03-0b15d07fe948","Type":"ContainerStarted","Data":"f11b06be301a9f9b682d6007240d8ae8a505a1f6da93c11fc5e55f0036da5ae0"} Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.789019 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-pn587" podStartSLOduration=5.788988959 podStartE2EDuration="5.788988959s" podCreationTimestamp="2026-02-17 14:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:41.769254257 +0000 UTC m=+1228.112182526" watchObservedRunningTime="2026-02-17 14:26:41.788988959 +0000 UTC m=+1228.131917238" Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.813827 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-d162-account-create-update-khb5j" podStartSLOduration=5.813790666 podStartE2EDuration="5.813790666s" podCreationTimestamp="2026-02-17 14:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:41.785885917 +0000 UTC 
m=+1228.128814186" watchObservedRunningTime="2026-02-17 14:26:41.813790666 +0000 UTC m=+1228.156718935" Feb 17 14:26:41 crc kubenswrapper[4836]: I0217 14:26:41.922845 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-c8vxs" podStartSLOduration=3.922817854 podStartE2EDuration="3.922817854s" podCreationTimestamp="2026-02-17 14:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:41.816729674 +0000 UTC m=+1228.159657943" watchObservedRunningTime="2026-02-17 14:26:41.922817854 +0000 UTC m=+1228.265746123" Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.583754 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" path="/var/lib/kubelet/pods/ae1de151-2799-49ba-839c-70e035c6f1d5/volumes" Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.782019 4836 generic.go:334] "Generic (PLEG): container finished" podID="77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b" containerID="0179fb4c7564ecef52fa63a2f91fe687b3340cb3f7aaa46ff46f4ec68e5ee26d" exitCode=0 Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.782149 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pn587" event={"ID":"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b","Type":"ContainerDied","Data":"0179fb4c7564ecef52fa63a2f91fe687b3340cb3f7aaa46ff46f4ec68e5ee26d"} Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.785198 4836 generic.go:334] "Generic (PLEG): container finished" podID="e562d506-21d2-4edd-90b8-97bd11bf068e" containerID="c0e6439979838c98e66157164ef8073f70f7245c52bc8c72b4753a2777fab786" exitCode=0 Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.785280 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k7zc9" 
event={"ID":"e562d506-21d2-4edd-90b8-97bd11bf068e","Type":"ContainerDied","Data":"c0e6439979838c98e66157164ef8073f70f7245c52bc8c72b4753a2777fab786"} Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.790037 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wbh2w" event={"ID":"312259c2-4f8f-401d-a19e-64d0bc7dd35f","Type":"ContainerStarted","Data":"de75bc86bd0570fcef07a3f3195cfec352721b59eef66e22b061ebca87ca6456"} Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.790165 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.792092 4836 generic.go:334] "Generic (PLEG): container finished" podID="ec9408e6-0474-4f84-842e-b1c20f42a7b8" containerID="1e0077eb33d7cdccabd3d53eadba26bb33ef9899ccdc0c0e3003d7b300233249" exitCode=0 Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.792173 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec9408e6-0474-4f84-842e-b1c20f42a7b8","Type":"ContainerDied","Data":"1e0077eb33d7cdccabd3d53eadba26bb33ef9899ccdc0c0e3003d7b300233249"} Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.804984 4836 generic.go:334] "Generic (PLEG): container finished" podID="6f866bb7-5209-4275-8884-df6f074b3f7c" containerID="85576fe15acb4ec82e880a96b65a7ac8f381e29f3114bed6ed63c37985fe03f0" exitCode=0 Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.805166 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6f866bb7-5209-4275-8884-df6f074b3f7c","Type":"ContainerDied","Data":"85576fe15acb4ec82e880a96b65a7ac8f381e29f3114bed6ed63c37985fe03f0"} Feb 17 14:26:42 crc kubenswrapper[4836]: I0217 14:26:42.884848 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-wbh2w" podStartSLOduration=8.884823284 
podStartE2EDuration="8.884823284s" podCreationTimestamp="2026-02-17 14:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:42.8722132 +0000 UTC m=+1229.215141479" watchObservedRunningTime="2026-02-17 14:26:42.884823284 +0000 UTC m=+1229.227751553" Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.342907 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:43 crc kubenswrapper[4836]: E0217 14:26:43.343163 4836 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 14:26:43 crc kubenswrapper[4836]: E0217 14:26:43.343195 4836 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 14:26:43 crc kubenswrapper[4836]: E0217 14:26:43.343274 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift podName:e482046c-502a-4f41-b013-7b3ef1c71ee1 nodeName:}" failed. No retries permitted until 2026-02-17 14:26:51.343248737 +0000 UTC m=+1237.686177006 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift") pod "swift-storage-0" (UID: "e482046c-502a-4f41-b013-7b3ef1c71ee1") : configmap "swift-ring-files" not found Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.818652 4836 generic.go:334] "Generic (PLEG): container finished" podID="2ae1659d-7892-4744-a570-4ba7c65e4caf" containerID="bda2c6a640050c54150d82f44c6e78a2f7107b79ee0b4f6fd03e4d8c6e1019d3" exitCode=0 Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.818748 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8f3-account-create-update-kmlvm" event={"ID":"2ae1659d-7892-4744-a570-4ba7c65e4caf","Type":"ContainerDied","Data":"bda2c6a640050c54150d82f44c6e78a2f7107b79ee0b4f6fd03e4d8c6e1019d3"} Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.820816 4836 generic.go:334] "Generic (PLEG): container finished" podID="1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5" containerID="55c6c8d1d911f68476c5d07d35dec7d57e500cdc1c29d64681255555160897dd" exitCode=0 Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.820881 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d162-account-create-update-khb5j" event={"ID":"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5","Type":"ContainerDied","Data":"55c6c8d1d911f68476c5d07d35dec7d57e500cdc1c29d64681255555160897dd"} Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.823037 4836 generic.go:334] "Generic (PLEG): container finished" podID="54905e17-d443-4465-8f70-7be04a89086f" containerID="bf410eadcd21b6c409b08a23916bc0ac4d5ba43505387a89c251ab098b87e562" exitCode=0 Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.823115 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-83de-account-create-update-fh75b" event={"ID":"54905e17-d443-4465-8f70-7be04a89086f","Type":"ContainerDied","Data":"bf410eadcd21b6c409b08a23916bc0ac4d5ba43505387a89c251ab098b87e562"} Feb 17 
14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.825429 4836 generic.go:334] "Generic (PLEG): container finished" podID="21d02a34-d68b-4cae-9f03-0b15d07fe948" containerID="4c54331d8c22a82e7135a4bdfa56b01c1bacccea5967146f9a8bb1c17d9ca3da" exitCode=0 Feb 17 14:26:43 crc kubenswrapper[4836]: I0217 14:26:43.825630 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8vxs" event={"ID":"21d02a34-d68b-4cae-9f03-0b15d07fe948","Type":"ContainerDied","Data":"4c54331d8c22a82e7135a4bdfa56b01c1bacccea5967146f9a8bb1c17d9ca3da"} Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.399348 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pn587" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.405603 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.583779 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96b57\" (UniqueName: \"kubernetes.io/projected/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-kube-api-access-96b57\") pod \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\" (UID: \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.583835 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-operator-scripts\") pod \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\" (UID: \"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b\") " Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.583996 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-operator-scripts\") pod \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\" (UID: 
\"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.584071 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfzfp\" (UniqueName: \"kubernetes.io/projected/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-kube-api-access-sfzfp\") pod \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\" (UID: \"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b\") " Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.585035 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "add50d48-0a1c-4d2f-bcc3-ae9355e95c3b" (UID: "add50d48-0a1c-4d2f-bcc3-ae9355e95c3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.585035 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b" (UID: "77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.592917 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-kube-api-access-96b57" (OuterVolumeSpecName: "kube-api-access-96b57") pod "77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b" (UID: "77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b"). InnerVolumeSpecName "kube-api-access-96b57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.612976 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-kube-api-access-sfzfp" (OuterVolumeSpecName: "kube-api-access-sfzfp") pod "add50d48-0a1c-4d2f-bcc3-ae9355e95c3b" (UID: "add50d48-0a1c-4d2f-bcc3-ae9355e95c3b"). InnerVolumeSpecName "kube-api-access-sfzfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.686019 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96b57\" (UniqueName: \"kubernetes.io/projected/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-kube-api-access-96b57\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.686047 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.686056 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.686065 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfzfp\" (UniqueName: \"kubernetes.io/projected/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b-kube-api-access-sfzfp\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.835470 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pn587" event={"ID":"77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b","Type":"ContainerDied","Data":"337993452ec5730e0c3cc016c9ba757fc9a43025082efe848b2e9b3eeab12528"} Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.835536 
4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337993452ec5730e0c3cc016c9ba757fc9a43025082efe848b2e9b3eeab12528" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.835496 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pn587" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.836666 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hx7tv" event={"ID":"add50d48-0a1c-4d2f-bcc3-ae9355e95c3b","Type":"ContainerDied","Data":"f20e9dc32e437145f802a537fa2665afc2ae79a191175483ec92ca0e4108918e"} Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.836715 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f20e9dc32e437145f802a537fa2665afc2ae79a191175483ec92ca0e4108918e" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.836681 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hx7tv" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.838623 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"039a526c-4f5a-4641-9340-b18459145569","Type":"ContainerStarted","Data":"cbe9c1822db3e4df38f03422ecc405cf112afecffa429cbfc14cbe462e4d38fe"} Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.839105 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.844251 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.849666 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerStarted","Data":"2634435ab0e106f5ce9041eacdd8794376187c382228fa8d9f52a71bd9ec4553"} Feb 17 14:26:44 crc kubenswrapper[4836]: I0217 14:26:44.880340 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=29.436356179 podStartE2EDuration="1m0.88030862s" podCreationTimestamp="2026-02-17 14:25:44 +0000 UTC" firstStartedPulling="2026-02-17 14:26:08.219685214 +0000 UTC m=+1194.562613483" lastFinishedPulling="2026-02-17 14:26:39.663637655 +0000 UTC m=+1226.006565924" observedRunningTime="2026-02-17 14:26:44.862773765 +0000 UTC m=+1231.205702054" watchObservedRunningTime="2026-02-17 14:26:44.88030862 +0000 UTC m=+1231.223236889" Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.863181 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d162-account-create-update-khb5j" event={"ID":"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5","Type":"ContainerDied","Data":"4488137d41693a0eed0cf3344bd79369971085583a9f2a449f30914ec350a79a"} Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.863511 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4488137d41693a0eed0cf3344bd79369971085583a9f2a449f30914ec350a79a" Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.870167 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-83de-account-create-update-fh75b" event={"ID":"54905e17-d443-4465-8f70-7be04a89086f","Type":"ContainerDied","Data":"944bbce44653f332757b41aa709ff915e239c63242463d545f1454d580e8215e"} Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.870224 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="944bbce44653f332757b41aa709ff915e239c63242463d545f1454d580e8215e" Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.872906 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-k7zc9" event={"ID":"e562d506-21d2-4edd-90b8-97bd11bf068e","Type":"ContainerDied","Data":"7bdbd8741734a15b799a8b486b726df845fdab465fc49bc23c59d7482c73e7d5"} Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.872935 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bdbd8741734a15b799a8b486b726df845fdab465fc49bc23c59d7482c73e7d5" Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.874821 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8vxs" event={"ID":"21d02a34-d68b-4cae-9f03-0b15d07fe948","Type":"ContainerDied","Data":"f11b06be301a9f9b682d6007240d8ae8a505a1f6da93c11fc5e55f0036da5ae0"} Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.874843 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f11b06be301a9f9b682d6007240d8ae8a505a1f6da93c11fc5e55f0036da5ae0" Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.876308 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8f3-account-create-update-kmlvm" event={"ID":"2ae1659d-7892-4744-a570-4ba7c65e4caf","Type":"ContainerDied","Data":"053f72cbdf2c7e5f30db435af69e5bbe1df08b8271492028d870e534720e3fc6"} Feb 17 14:26:45 crc kubenswrapper[4836]: I0217 14:26:45.876335 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053f72cbdf2c7e5f30db435af69e5bbe1df08b8271492028d870e534720e3fc6" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.011163 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.072765 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.132409 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54905e17-d443-4465-8f70-7be04a89086f-operator-scripts\") pod \"54905e17-d443-4465-8f70-7be04a89086f\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.133009 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54905e17-d443-4465-8f70-7be04a89086f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54905e17-d443-4465-8f70-7be04a89086f" (UID: "54905e17-d443-4465-8f70-7be04a89086f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.134446 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8nc5\" (UniqueName: \"kubernetes.io/projected/54905e17-d443-4465-8f70-7be04a89086f-kube-api-access-r8nc5\") pod \"54905e17-d443-4465-8f70-7be04a89086f\" (UID: \"54905e17-d443-4465-8f70-7be04a89086f\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.135240 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54905e17-d443-4465-8f70-7be04a89086f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.142767 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.150833 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54905e17-d443-4465-8f70-7be04a89086f-kube-api-access-r8nc5" (OuterVolumeSpecName: "kube-api-access-r8nc5") pod "54905e17-d443-4465-8f70-7be04a89086f" (UID: "54905e17-d443-4465-8f70-7be04a89086f"). InnerVolumeSpecName "kube-api-access-r8nc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.230610 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.235721 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9ljc\" (UniqueName: \"kubernetes.io/projected/2ae1659d-7892-4744-a570-4ba7c65e4caf-kube-api-access-g9ljc\") pod \"2ae1659d-7892-4744-a570-4ba7c65e4caf\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.235800 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d02a34-d68b-4cae-9f03-0b15d07fe948-operator-scripts\") pod \"21d02a34-d68b-4cae-9f03-0b15d07fe948\" (UID: \"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.235850 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stl2m\" (UniqueName: \"kubernetes.io/projected/e562d506-21d2-4edd-90b8-97bd11bf068e-kube-api-access-stl2m\") pod \"e562d506-21d2-4edd-90b8-97bd11bf068e\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.235885 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1659d-7892-4744-a570-4ba7c65e4caf-operator-scripts\") pod \"2ae1659d-7892-4744-a570-4ba7c65e4caf\" (UID: \"2ae1659d-7892-4744-a570-4ba7c65e4caf\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.235920 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e562d506-21d2-4edd-90b8-97bd11bf068e-operator-scripts\") pod \"e562d506-21d2-4edd-90b8-97bd11bf068e\" (UID: \"e562d506-21d2-4edd-90b8-97bd11bf068e\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.235954 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f86lz\" (UniqueName: \"kubernetes.io/projected/21d02a34-d68b-4cae-9f03-0b15d07fe948-kube-api-access-f86lz\") pod \"21d02a34-d68b-4cae-9f03-0b15d07fe948\" (UID: \"21d02a34-d68b-4cae-9f03-0b15d07fe948\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.236226 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8nc5\" (UniqueName: \"kubernetes.io/projected/54905e17-d443-4465-8f70-7be04a89086f-kube-api-access-r8nc5\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.237083 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d02a34-d68b-4cae-9f03-0b15d07fe948-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21d02a34-d68b-4cae-9f03-0b15d07fe948" (UID: "21d02a34-d68b-4cae-9f03-0b15d07fe948"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.237470 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae1659d-7892-4744-a570-4ba7c65e4caf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ae1659d-7892-4744-a570-4ba7c65e4caf" (UID: "2ae1659d-7892-4744-a570-4ba7c65e4caf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.237564 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e562d506-21d2-4edd-90b8-97bd11bf068e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e562d506-21d2-4edd-90b8-97bd11bf068e" (UID: "e562d506-21d2-4edd-90b8-97bd11bf068e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.241030 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e562d506-21d2-4edd-90b8-97bd11bf068e-kube-api-access-stl2m" (OuterVolumeSpecName: "kube-api-access-stl2m") pod "e562d506-21d2-4edd-90b8-97bd11bf068e" (UID: "e562d506-21d2-4edd-90b8-97bd11bf068e"). InnerVolumeSpecName "kube-api-access-stl2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.245937 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.246263 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d02a34-d68b-4cae-9f03-0b15d07fe948-kube-api-access-f86lz" (OuterVolumeSpecName: "kube-api-access-f86lz") pod "21d02a34-d68b-4cae-9f03-0b15d07fe948" (UID: "21d02a34-d68b-4cae-9f03-0b15d07fe948"). 
InnerVolumeSpecName "kube-api-access-f86lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.246345 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae1659d-7892-4744-a570-4ba7c65e4caf-kube-api-access-g9ljc" (OuterVolumeSpecName: "kube-api-access-g9ljc") pod "2ae1659d-7892-4744-a570-4ba7c65e4caf" (UID: "2ae1659d-7892-4744-a570-4ba7c65e4caf"). InnerVolumeSpecName "kube-api-access-g9ljc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.338769 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d02a34-d68b-4cae-9f03-0b15d07fe948-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.339418 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stl2m\" (UniqueName: \"kubernetes.io/projected/e562d506-21d2-4edd-90b8-97bd11bf068e-kube-api-access-stl2m\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.339753 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1659d-7892-4744-a570-4ba7c65e4caf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.339824 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e562d506-21d2-4edd-90b8-97bd11bf068e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.339843 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f86lz\" (UniqueName: \"kubernetes.io/projected/21d02a34-d68b-4cae-9f03-0b15d07fe948-kube-api-access-f86lz\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 
14:26:46.339859 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9ljc\" (UniqueName: \"kubernetes.io/projected/2ae1659d-7892-4744-a570-4ba7c65e4caf-kube-api-access-g9ljc\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.441487 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-operator-scripts\") pod \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.441711 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sfbb\" (UniqueName: \"kubernetes.io/projected/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-kube-api-access-8sfbb\") pod \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\" (UID: \"1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5\") " Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.443082 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5" (UID: "1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.447969 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-kube-api-access-8sfbb" (OuterVolumeSpecName: "kube-api-access-8sfbb") pod "1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5" (UID: "1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5"). InnerVolumeSpecName "kube-api-access-8sfbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.547861 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sfbb\" (UniqueName: \"kubernetes.io/projected/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-kube-api-access-8sfbb\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.547968 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.887958 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6f866bb7-5209-4275-8884-df6f074b3f7c","Type":"ContainerStarted","Data":"a5c50e91fbe9d5bf5c447d513b9cd45546c7f0ab529bc7790065740b89966019"} Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.889665 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzmx" event={"ID":"cb33695b-c451-44b2-8a2a-fe534a4040e3","Type":"ContainerStarted","Data":"3f3e6d9b2f9b81e95f3278234cf18a3d4bff52824dc7f44df99e615056b57f74"} Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.890251 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.894518 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec9408e6-0474-4f84-842e-b1c20f42a7b8","Type":"ContainerStarted","Data":"a85bb5d25c822d0a6fbd4857f4d63038e54f36103237d14bf65da5288ba6755c"} Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.894565 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k7zc9" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.895005 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d162-account-create-update-khb5j" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.895059 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-83de-account-create-update-fh75b" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.895065 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8vxs" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.895182 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8f3-account-create-update-kmlvm" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.928527 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dbzmx" podStartSLOduration=5.785915217 podStartE2EDuration="10.928500809s" podCreationTimestamp="2026-02-17 14:26:36 +0000 UTC" firstStartedPulling="2026-02-17 14:26:40.721955617 +0000 UTC m=+1227.064883886" lastFinishedPulling="2026-02-17 14:26:45.864541209 +0000 UTC m=+1232.207469478" observedRunningTime="2026-02-17 14:26:46.917141249 +0000 UTC m=+1233.260069528" watchObservedRunningTime="2026-02-17 14:26:46.928500809 +0000 UTC m=+1233.271429078" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.953054 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.335145136 podStartE2EDuration="1m9.95303316s" podCreationTimestamp="2026-02-17 14:25:37 +0000 UTC" firstStartedPulling="2026-02-17 14:25:39.25139841 +0000 UTC m=+1165.594326679" lastFinishedPulling="2026-02-17 14:26:07.869286434 +0000 UTC m=+1194.212214703" observedRunningTime="2026-02-17 
14:26:46.944675108 +0000 UTC m=+1233.287603387" watchObservedRunningTime="2026-02-17 14:26:46.95303316 +0000 UTC m=+1233.295961429" Feb 17 14:26:46 crc kubenswrapper[4836]: I0217 14:26:46.979222 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.04717529 podStartE2EDuration="1m10.979202933s" podCreationTimestamp="2026-02-17 14:25:36 +0000 UTC" firstStartedPulling="2026-02-17 14:25:38.624540408 +0000 UTC m=+1164.967468677" lastFinishedPulling="2026-02-17 14:26:07.556568051 +0000 UTC m=+1193.899496320" observedRunningTime="2026-02-17 14:26:46.976911962 +0000 UTC m=+1233.319840241" watchObservedRunningTime="2026-02-17 14:26:46.979202933 +0000 UTC m=+1233.322131222" Feb 17 14:26:47 crc kubenswrapper[4836]: I0217 14:26:47.831924 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="1c33fb01-9bf7-43f1-86d5-004e70d3721c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 14:26:48 crc kubenswrapper[4836]: I0217 14:26:48.073045 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 14:26:49 crc kubenswrapper[4836]: I0217 14:26:49.113086 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 14:26:49 crc kubenswrapper[4836]: I0217 14:26:49.747496 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:26:49 crc kubenswrapper[4836]: I0217 14:26:49.828768 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bpss8"] Feb 17 14:26:49 crc kubenswrapper[4836]: I0217 14:26:49.829095 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" podUID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerName="dnsmasq-dns" 
containerID="cri-o://a4dd5c55405656df129bfcb6d3d7edde886d28467877f4823411944db38277ef" gracePeriod=10 Feb 17 14:26:50 crc kubenswrapper[4836]: I0217 14:26:50.108747 4836 generic.go:334] "Generic (PLEG): container finished" podID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerID="a4dd5c55405656df129bfcb6d3d7edde886d28467877f4823411944db38277ef" exitCode=0 Feb 17 14:26:50 crc kubenswrapper[4836]: I0217 14:26:50.108945 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" event={"ID":"7e0a6937-945b-48fc-a328-6715e10ffddc","Type":"ContainerDied","Data":"a4dd5c55405656df129bfcb6d3d7edde886d28467877f4823411944db38277ef"} Feb 17 14:26:50 crc kubenswrapper[4836]: I0217 14:26:50.259650 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-c8vxs"] Feb 17 14:26:50 crc kubenswrapper[4836]: I0217 14:26:50.271397 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c8vxs"] Feb 17 14:26:50 crc kubenswrapper[4836]: I0217 14:26:50.582587 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d02a34-d68b-4cae-9f03-0b15d07fe948" path="/var/lib/kubelet/pods/21d02a34-d68b-4cae-9f03-0b15d07fe948/volumes" Feb 17 14:26:50 crc kubenswrapper[4836]: I0217 14:26:50.837999 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.012939 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-sb\") pod \"7e0a6937-945b-48fc-a328-6715e10ffddc\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.013013 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-config\") pod \"7e0a6937-945b-48fc-a328-6715e10ffddc\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.013111 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4bn\" (UniqueName: \"kubernetes.io/projected/7e0a6937-945b-48fc-a328-6715e10ffddc-kube-api-access-tx4bn\") pod \"7e0a6937-945b-48fc-a328-6715e10ffddc\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.013226 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-nb\") pod \"7e0a6937-945b-48fc-a328-6715e10ffddc\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.013321 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-dns-svc\") pod \"7e0a6937-945b-48fc-a328-6715e10ffddc\" (UID: \"7e0a6937-945b-48fc-a328-6715e10ffddc\") " Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.123853 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7e0a6937-945b-48fc-a328-6715e10ffddc-kube-api-access-tx4bn" (OuterVolumeSpecName: "kube-api-access-tx4bn") pod "7e0a6937-945b-48fc-a328-6715e10ffddc" (UID: "7e0a6937-945b-48fc-a328-6715e10ffddc"). InnerVolumeSpecName "kube-api-access-tx4bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.141699 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerStarted","Data":"839af704fe28aeff5f1ab20ca6e7c7a0fb25790fc5bc232fe9131c132f8e0bf9"} Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.144791 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" event={"ID":"7e0a6937-945b-48fc-a328-6715e10ffddc","Type":"ContainerDied","Data":"ff5ecfc3d719da4b799fbc70b95c4645eaf91702ed47ae7bfdec7b990d4e151b"} Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.145009 4836 scope.go:117] "RemoveContainer" containerID="a4dd5c55405656df129bfcb6d3d7edde886d28467877f4823411944db38277ef" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.145217 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bpss8" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.218978 4836 scope.go:117] "RemoveContainer" containerID="5d73acc7d3b7d21dfd57bd1f5f6891bf754918c51d20232beb9b0071a1de3710" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.219083 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.439883842 podStartE2EDuration="1m7.219050666s" podCreationTimestamp="2026-02-17 14:25:44 +0000 UTC" firstStartedPulling="2026-02-17 14:26:06.510550124 +0000 UTC m=+1192.853478393" lastFinishedPulling="2026-02-17 14:26:50.289716948 +0000 UTC m=+1236.632645217" observedRunningTime="2026-02-17 14:26:51.192702944 +0000 UTC m=+1237.535631223" watchObservedRunningTime="2026-02-17 14:26:51.219050666 +0000 UTC m=+1237.561978935" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.219848 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e0a6937-945b-48fc-a328-6715e10ffddc" (UID: "7e0a6937-945b-48fc-a328-6715e10ffddc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.224596 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.224620 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4bn\" (UniqueName: \"kubernetes.io/projected/7e0a6937-945b-48fc-a328-6715e10ffddc-kube-api-access-tx4bn\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.229726 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-config" (OuterVolumeSpecName: "config") pod "7e0a6937-945b-48fc-a328-6715e10ffddc" (UID: "7e0a6937-945b-48fc-a328-6715e10ffddc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.240114 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e0a6937-945b-48fc-a328-6715e10ffddc" (UID: "7e0a6937-945b-48fc-a328-6715e10ffddc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.244532 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e0a6937-945b-48fc-a328-6715e10ffddc" (UID: "7e0a6937-945b-48fc-a328-6715e10ffddc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.326615 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.328660 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.328791 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a6937-945b-48fc-a328-6715e10ffddc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.430874 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.431315 4836 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.431785 4836 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.431899 4836 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift podName:e482046c-502a-4f41-b013-7b3ef1c71ee1 nodeName:}" failed. No retries permitted until 2026-02-17 14:27:07.431860228 +0000 UTC m=+1253.774788497 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift") pod "swift-storage-0" (UID: "e482046c-502a-4f41-b013-7b3ef1c71ee1") : configmap "swift-ring-files" not found Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.489507 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bpss8"] Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.506820 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bpss8"] Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816218 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-z8g7x"] Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816648 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54905e17-d443-4465-8f70-7be04a89086f" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816662 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="54905e17-d443-4465-8f70-7be04a89086f" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816674 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerName="dnsmasq-dns" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816680 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerName="dnsmasq-dns" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816693 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerName="dnsmasq-dns" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816700 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerName="dnsmasq-dns" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816708 
4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d02a34-d68b-4cae-9f03-0b15d07fe948" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816715 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d02a34-d68b-4cae-9f03-0b15d07fe948" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816735 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add50d48-0a1c-4d2f-bcc3-ae9355e95c3b" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816742 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="add50d48-0a1c-4d2f-bcc3-ae9355e95c3b" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816755 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816761 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816774 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e562d506-21d2-4edd-90b8-97bd11bf068e" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816780 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e562d506-21d2-4edd-90b8-97bd11bf068e" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816791 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816796 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b" 
containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816806 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae1659d-7892-4744-a570-4ba7c65e4caf" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816812 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae1659d-7892-4744-a570-4ba7c65e4caf" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816822 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerName="init" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816828 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerName="init" Feb 17 14:26:51 crc kubenswrapper[4836]: E0217 14:26:51.816841 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerName="init" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.816847 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerName="init" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817004 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae1659d-7892-4744-a570-4ba7c65e4caf" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817027 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1de151-2799-49ba-839c-70e035c6f1d5" containerName="dnsmasq-dns" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817035 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="e562d506-21d2-4edd-90b8-97bd11bf068e" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817049 4836 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817061 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0a6937-945b-48fc-a328-6715e10ffddc" containerName="dnsmasq-dns" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817072 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="54905e17-d443-4465-8f70-7be04a89086f" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817087 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d02a34-d68b-4cae-9f03-0b15d07fe948" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817102 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="add50d48-0a1c-4d2f-bcc3-ae9355e95c3b" containerName="mariadb-database-create" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817112 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5" containerName="mariadb-account-create-update" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.817834 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.820495 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.820854 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qbbvn" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.835021 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z8g7x"] Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.886168 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ghk5k" podUID="5949d44f-ef6d-417e-9035-9b235cd59863" containerName="ovn-controller" probeResult="failure" output=< Feb 17 14:26:51 crc kubenswrapper[4836]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 14:26:51 crc kubenswrapper[4836]: > Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.942608 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grffb\" (UniqueName: \"kubernetes.io/projected/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-kube-api-access-grffb\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.943160 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-combined-ca-bundle\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.943315 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-config-data\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:51 crc kubenswrapper[4836]: I0217 14:26:51.943571 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-db-sync-config-data\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.107036 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-combined-ca-bundle\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.107099 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-config-data\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.107189 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-db-sync-config-data\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.107243 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grffb\" (UniqueName: \"kubernetes.io/projected/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-kube-api-access-grffb\") pod 
\"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.121808 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-db-sync-config-data\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.131891 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-config-data\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.132245 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-combined-ca-bundle\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.141608 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grffb\" (UniqueName: \"kubernetes.io/projected/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-kube-api-access-grffb\") pod \"glance-db-sync-z8g7x\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.163823 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z8g7x" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.587727 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0a6937-945b-48fc-a328-6715e10ffddc" path="/var/lib/kubelet/pods/7e0a6937-945b-48fc-a328-6715e10ffddc/volumes" Feb 17 14:26:52 crc kubenswrapper[4836]: I0217 14:26:52.849114 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z8g7x"] Feb 17 14:26:52 crc kubenswrapper[4836]: W0217 14:26:52.862395 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf3a6cf1_bca0_45b2_9f7c_6d483452d49d.slice/crio-a970e805deb8fc7e4ea80574fe0f4020e8d303f5c75ae4049947b41814dd24fc WatchSource:0}: Error finding container a970e805deb8fc7e4ea80574fe0f4020e8d303f5c75ae4049947b41814dd24fc: Status 404 returned error can't find the container with id a970e805deb8fc7e4ea80574fe0f4020e8d303f5c75ae4049947b41814dd24fc Feb 17 14:26:53 crc kubenswrapper[4836]: I0217 14:26:53.220120 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8g7x" event={"ID":"df3a6cf1-bca0-45b2-9f7c-6d483452d49d","Type":"ContainerStarted","Data":"a970e805deb8fc7e4ea80574fe0f4020e8d303f5c75ae4049947b41814dd24fc"} Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.251825 4836 generic.go:334] "Generic (PLEG): container finished" podID="cb33695b-c451-44b2-8a2a-fe534a4040e3" containerID="3f3e6d9b2f9b81e95f3278234cf18a3d4bff52824dc7f44df99e615056b57f74" exitCode=0 Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.252346 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzmx" event={"ID":"cb33695b-c451-44b2-8a2a-fe534a4040e3","Type":"ContainerDied","Data":"3f3e6d9b2f9b81e95f3278234cf18a3d4bff52824dc7f44df99e615056b57f74"} Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.257448 4836 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-h9gmq"] Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.259244 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.264655 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.274736 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h9gmq"] Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.446547 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa6524b-2b3f-47c3-b55f-1435685df59d-operator-scripts\") pod \"root-account-create-update-h9gmq\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.446805 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5f4x\" (UniqueName: \"kubernetes.io/projected/caa6524b-2b3f-47c3-b55f-1435685df59d-kube-api-access-v5f4x\") pod \"root-account-create-update-h9gmq\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.471656 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.549672 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa6524b-2b3f-47c3-b55f-1435685df59d-operator-scripts\") pod \"root-account-create-update-h9gmq\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " 
pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.549820 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5f4x\" (UniqueName: \"kubernetes.io/projected/caa6524b-2b3f-47c3-b55f-1435685df59d-kube-api-access-v5f4x\") pod \"root-account-create-update-h9gmq\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.551006 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa6524b-2b3f-47c3-b55f-1435685df59d-operator-scripts\") pod \"root-account-create-update-h9gmq\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.581822 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5f4x\" (UniqueName: \"kubernetes.io/projected/caa6524b-2b3f-47c3-b55f-1435685df59d-kube-api-access-v5f4x\") pod \"root-account-create-update-h9gmq\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:55 crc kubenswrapper[4836]: I0217 14:26:55.583394 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h9gmq" Feb 17 14:26:56 crc kubenswrapper[4836]: I0217 14:26:56.170615 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h9gmq"] Feb 17 14:26:56 crc kubenswrapper[4836]: I0217 14:26:56.335763 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h9gmq" event={"ID":"caa6524b-2b3f-47c3-b55f-1435685df59d","Type":"ContainerStarted","Data":"4412cdd3236c16e7c55d72426203ad2b29aa25a446957f2189655406d782c8f6"} Feb 17 14:26:56 crc kubenswrapper[4836]: I0217 14:26:56.896655 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ghk5k" podUID="5949d44f-ef6d-417e-9035-9b235cd59863" containerName="ovn-controller" probeResult="failure" output=< Feb 17 14:26:56 crc kubenswrapper[4836]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 14:26:56 crc kubenswrapper[4836]: > Feb 17 14:26:56 crc kubenswrapper[4836]: I0217 14:26:56.940269 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:56 crc kubenswrapper[4836]: I0217 14:26:56.968411 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:26:56 crc kubenswrapper[4836]: I0217 14:26:56.970124 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-j4jj9" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.135754 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-combined-ca-bundle\") pod \"cb33695b-c451-44b2-8a2a-fe534a4040e3\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.137793 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-dispersionconf\") pod \"cb33695b-c451-44b2-8a2a-fe534a4040e3\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.138070 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb33695b-c451-44b2-8a2a-fe534a4040e3-etc-swift\") pod \"cb33695b-c451-44b2-8a2a-fe534a4040e3\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.138135 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlxn6\" (UniqueName: \"kubernetes.io/projected/cb33695b-c451-44b2-8a2a-fe534a4040e3-kube-api-access-tlxn6\") pod \"cb33695b-c451-44b2-8a2a-fe534a4040e3\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.138170 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-scripts\") pod \"cb33695b-c451-44b2-8a2a-fe534a4040e3\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.138261 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-ring-data-devices\") pod \"cb33695b-c451-44b2-8a2a-fe534a4040e3\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.138291 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-swiftconf\") pod \"cb33695b-c451-44b2-8a2a-fe534a4040e3\" (UID: \"cb33695b-c451-44b2-8a2a-fe534a4040e3\") " Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.139168 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cb33695b-c451-44b2-8a2a-fe534a4040e3" (UID: "cb33695b-c451-44b2-8a2a-fe534a4040e3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.139531 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb33695b-c451-44b2-8a2a-fe534a4040e3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cb33695b-c451-44b2-8a2a-fe534a4040e3" (UID: "cb33695b-c451-44b2-8a2a-fe534a4040e3"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.140392 4836 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.140422 4836 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb33695b-c451-44b2-8a2a-fe534a4040e3-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.149500 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cb33695b-c451-44b2-8a2a-fe534a4040e3" (UID: "cb33695b-c451-44b2-8a2a-fe534a4040e3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.287182 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb33695b-c451-44b2-8a2a-fe534a4040e3-kube-api-access-tlxn6" (OuterVolumeSpecName: "kube-api-access-tlxn6") pod "cb33695b-c451-44b2-8a2a-fe534a4040e3" (UID: "cb33695b-c451-44b2-8a2a-fe534a4040e3"). InnerVolumeSpecName "kube-api-access-tlxn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.290562 4836 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.290608 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlxn6\" (UniqueName: \"kubernetes.io/projected/cb33695b-c451-44b2-8a2a-fe534a4040e3-kube-api-access-tlxn6\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.313265 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb33695b-c451-44b2-8a2a-fe534a4040e3" (UID: "cb33695b-c451-44b2-8a2a-fe534a4040e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.342965 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-scripts" (OuterVolumeSpecName: "scripts") pod "cb33695b-c451-44b2-8a2a-fe534a4040e3" (UID: "cb33695b-c451-44b2-8a2a-fe534a4040e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.344728 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cb33695b-c451-44b2-8a2a-fe534a4040e3" (UID: "cb33695b-c451-44b2-8a2a-fe534a4040e3"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.351051 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ghk5k-config-tv4tb"] Feb 17 14:26:57 crc kubenswrapper[4836]: E0217 14:26:57.351928 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb33695b-c451-44b2-8a2a-fe534a4040e3" containerName="swift-ring-rebalance" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.351964 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb33695b-c451-44b2-8a2a-fe534a4040e3" containerName="swift-ring-rebalance" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.352242 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb33695b-c451-44b2-8a2a-fe534a4040e3" containerName="swift-ring-rebalance" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.353356 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.357835 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.369110 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dbzmx" event={"ID":"cb33695b-c451-44b2-8a2a-fe534a4040e3","Type":"ContainerDied","Data":"0863004180b5c7074ba22f1ddb8c58005ebe6a0d2ac8583efc764697e8242881"} Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.369175 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0863004180b5c7074ba22f1ddb8c58005ebe6a0d2ac8583efc764697e8242881" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.369263 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dbzmx" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.379456 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ghk5k-config-tv4tb"] Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.396841 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.396951 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-additional-scripts\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.397563 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-scripts\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.397646 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-log-ovn\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.397760 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz7ht\" (UniqueName: \"kubernetes.io/projected/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-kube-api-access-xz7ht\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.397822 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run-ovn\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.397960 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.397979 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb33695b-c451-44b2-8a2a-fe534a4040e3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.397991 4836 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb33695b-c451-44b2-8a2a-fe534a4040e3-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.398937 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h9gmq" event={"ID":"caa6524b-2b3f-47c3-b55f-1435685df59d","Type":"ContainerStarted","Data":"ca8e0602e1b36f3c2d9bfabc7020988df18e6945d19646bd583313467d47a539"} Feb 17 14:26:57 crc kubenswrapper[4836]: I0217 14:26:57.455821 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/root-account-create-update-h9gmq" podStartSLOduration=2.455781994 podStartE2EDuration="2.455781994s" podCreationTimestamp="2026-02-17 14:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:57.428837576 +0000 UTC m=+1243.771765865" watchObservedRunningTime="2026-02-17 14:26:57.455781994 +0000 UTC m=+1243.798710273" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.504515 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-additional-scripts\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.505436 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-scripts\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.505487 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-log-ovn\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.505608 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz7ht\" (UniqueName: \"kubernetes.io/projected/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-kube-api-access-xz7ht\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " 
pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.505693 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run-ovn\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.505872 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.506455 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.507108 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-log-ovn\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.507656 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-additional-scripts\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 
14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.507991 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run-ovn\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.511251 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-scripts\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.533484 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz7ht\" (UniqueName: \"kubernetes.io/projected/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-kube-api-access-xz7ht\") pod \"ovn-controller-ghk5k-config-tv4tb\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.762927 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:57.836899 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="1c33fb01-9bf7-43f1-86d5-004e70d3721c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:58.074908 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ec9408e6-0474-4f84-842e-b1c20f42a7b8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:58.476585 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:58.496427 4836 generic.go:334] "Generic (PLEG): container finished" podID="caa6524b-2b3f-47c3-b55f-1435685df59d" containerID="ca8e0602e1b36f3c2d9bfabc7020988df18e6945d19646bd583313467d47a539" exitCode=0 Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:58.496529 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h9gmq" event={"ID":"caa6524b-2b3f-47c3-b55f-1435685df59d","Type":"ContainerDied","Data":"ca8e0602e1b36f3c2d9bfabc7020988df18e6945d19646bd583313467d47a539"} Feb 17 14:26:58 crc kubenswrapper[4836]: I0217 14:26:58.892715 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ghk5k-config-tv4tb"] Feb 17 14:26:58 crc kubenswrapper[4836]: W0217 14:26:58.918503 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb9f3dbb_ea37_4057_97c1_b93cbb39aaec.slice/crio-c2bde1d5ba9adc61b0950993bd532e188bd3fc6a817496d9051110a296f9c5b2 WatchSource:0}: Error finding container 
c2bde1d5ba9adc61b0950993bd532e188bd3fc6a817496d9051110a296f9c5b2: Status 404 returned error can't find the container with id c2bde1d5ba9adc61b0950993bd532e188bd3fc6a817496d9051110a296f9c5b2 Feb 17 14:26:59 crc kubenswrapper[4836]: I0217 14:26:59.514388 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ghk5k-config-tv4tb" event={"ID":"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec","Type":"ContainerStarted","Data":"faf1f0c01e2ba58effda0101e73091532e490c7632b908240461cde1c4eacd7e"} Feb 17 14:26:59 crc kubenswrapper[4836]: I0217 14:26:59.515095 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ghk5k-config-tv4tb" event={"ID":"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec","Type":"ContainerStarted","Data":"c2bde1d5ba9adc61b0950993bd532e188bd3fc6a817496d9051110a296f9c5b2"} Feb 17 14:26:59 crc kubenswrapper[4836]: I0217 14:26:59.766046 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:26:59 crc kubenswrapper[4836]: I0217 14:26:59.766146 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:26:59 crc kubenswrapper[4836]: I0217 14:26:59.766236 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:26:59 crc kubenswrapper[4836]: I0217 14:26:59.767609 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"790067b54b3531952a7756a09b793da1fc53330ef71b8011e59f530ae444594e"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:26:59 crc kubenswrapper[4836]: I0217 14:26:59.767726 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://790067b54b3531952a7756a09b793da1fc53330ef71b8011e59f530ae444594e" gracePeriod=600 Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.143236 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h9gmq" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.180009 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ghk5k-config-tv4tb" podStartSLOduration=3.179973354 podStartE2EDuration="3.179973354s" podCreationTimestamp="2026-02-17 14:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:26:59.543119181 +0000 UTC m=+1245.886047470" watchObservedRunningTime="2026-02-17 14:27:00.179973354 +0000 UTC m=+1246.522901623" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.253664 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa6524b-2b3f-47c3-b55f-1435685df59d-operator-scripts\") pod \"caa6524b-2b3f-47c3-b55f-1435685df59d\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.253902 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5f4x\" (UniqueName: 
\"kubernetes.io/projected/caa6524b-2b3f-47c3-b55f-1435685df59d-kube-api-access-v5f4x\") pod \"caa6524b-2b3f-47c3-b55f-1435685df59d\" (UID: \"caa6524b-2b3f-47c3-b55f-1435685df59d\") " Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.254690 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa6524b-2b3f-47c3-b55f-1435685df59d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caa6524b-2b3f-47c3-b55f-1435685df59d" (UID: "caa6524b-2b3f-47c3-b55f-1435685df59d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.254807 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa6524b-2b3f-47c3-b55f-1435685df59d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.261799 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa6524b-2b3f-47c3-b55f-1435685df59d-kube-api-access-v5f4x" (OuterVolumeSpecName: "kube-api-access-v5f4x") pod "caa6524b-2b3f-47c3-b55f-1435685df59d" (UID: "caa6524b-2b3f-47c3-b55f-1435685df59d"). InnerVolumeSpecName "kube-api-access-v5f4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.358194 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5f4x\" (UniqueName: \"kubernetes.io/projected/caa6524b-2b3f-47c3-b55f-1435685df59d-kube-api-access-v5f4x\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.471684 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.476625 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.710505 4836 generic.go:334] "Generic (PLEG): container finished" podID="fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" containerID="faf1f0c01e2ba58effda0101e73091532e490c7632b908240461cde1c4eacd7e" exitCode=0 Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.710737 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ghk5k-config-tv4tb" event={"ID":"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec","Type":"ContainerDied","Data":"faf1f0c01e2ba58effda0101e73091532e490c7632b908240461cde1c4eacd7e"} Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.718030 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h9gmq" event={"ID":"caa6524b-2b3f-47c3-b55f-1435685df59d","Type":"ContainerDied","Data":"4412cdd3236c16e7c55d72426203ad2b29aa25a446957f2189655406d782c8f6"} Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.718114 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4412cdd3236c16e7c55d72426203ad2b29aa25a446957f2189655406d782c8f6" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.718206 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h9gmq" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.735360 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"790067b54b3531952a7756a09b793da1fc53330ef71b8011e59f530ae444594e"} Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.735709 4836 scope.go:117] "RemoveContainer" containerID="89b78e4cc2264dc06417ab903dd2a1618c1aee2c1d950babae0b011a2e9eac59" Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.735297 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="790067b54b3531952a7756a09b793da1fc53330ef71b8011e59f530ae444594e" exitCode=0 Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.735952 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"3c09fe81ffce38e5d9ef4195d8e69df0edfb238c5a8b73cb36be460e79dea4bb"} Feb 17 14:27:00 crc kubenswrapper[4836]: I0217 14:27:00.738369 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:01 crc kubenswrapper[4836]: I0217 14:27:01.882982 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ghk5k" Feb 17 14:27:04 crc kubenswrapper[4836]: I0217 14:27:04.956519 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:27:04 crc kubenswrapper[4836]: I0217 14:27:04.958176 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="prometheus" 
containerID="cri-o://5de26698cc194f27aa6fa46281e03b3fa0bc2faa6bf0ef9b745f3fec33e05835" gracePeriod=600 Feb 17 14:27:04 crc kubenswrapper[4836]: I0217 14:27:04.958587 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="thanos-sidecar" containerID="cri-o://839af704fe28aeff5f1ab20ca6e7c7a0fb25790fc5bc232fe9131c132f8e0bf9" gracePeriod=600 Feb 17 14:27:04 crc kubenswrapper[4836]: I0217 14:27:04.958692 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="config-reloader" containerID="cri-o://2634435ab0e106f5ce9041eacdd8794376187c382228fa8d9f52a71bd9ec4553" gracePeriod=600 Feb 17 14:27:05 crc kubenswrapper[4836]: I0217 14:27:05.472142 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.114:9090/-/ready\": dial tcp 10.217.0.114:9090: connect: connection refused" Feb 17 14:27:05 crc kubenswrapper[4836]: I0217 14:27:05.933325 4836 generic.go:334] "Generic (PLEG): container finished" podID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerID="839af704fe28aeff5f1ab20ca6e7c7a0fb25790fc5bc232fe9131c132f8e0bf9" exitCode=0 Feb 17 14:27:05 crc kubenswrapper[4836]: I0217 14:27:05.933369 4836 generic.go:334] "Generic (PLEG): container finished" podID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerID="2634435ab0e106f5ce9041eacdd8794376187c382228fa8d9f52a71bd9ec4553" exitCode=0 Feb 17 14:27:05 crc kubenswrapper[4836]: I0217 14:27:05.933379 4836 generic.go:334] "Generic (PLEG): container finished" podID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerID="5de26698cc194f27aa6fa46281e03b3fa0bc2faa6bf0ef9b745f3fec33e05835" exitCode=0 Feb 17 14:27:05 crc 
kubenswrapper[4836]: I0217 14:27:05.933407 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerDied","Data":"839af704fe28aeff5f1ab20ca6e7c7a0fb25790fc5bc232fe9131c132f8e0bf9"} Feb 17 14:27:05 crc kubenswrapper[4836]: I0217 14:27:05.933459 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerDied","Data":"2634435ab0e106f5ce9041eacdd8794376187c382228fa8d9f52a71bd9ec4553"} Feb 17 14:27:05 crc kubenswrapper[4836]: I0217 14:27:05.933475 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerDied","Data":"5de26698cc194f27aa6fa46281e03b3fa0bc2faa6bf0ef9b745f3fec33e05835"} Feb 17 14:27:07 crc kubenswrapper[4836]: I0217 14:27:07.478982 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:27:07 crc kubenswrapper[4836]: I0217 14:27:07.490418 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e482046c-502a-4f41-b013-7b3ef1c71ee1-etc-swift\") pod \"swift-storage-0\" (UID: \"e482046c-502a-4f41-b013-7b3ef1c71ee1\") " pod="openstack/swift-storage-0" Feb 17 14:27:07 crc kubenswrapper[4836]: I0217 14:27:07.890593 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 17 14:27:07 crc kubenswrapper[4836]: I0217 14:27:07.902342 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="1c33fb01-9bf7-43f1-86d5-004e70d3721c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.074616 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.448830 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w5qdk"] Feb 17 14:27:08 crc kubenswrapper[4836]: E0217 14:27:08.453879 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa6524b-2b3f-47c3-b55f-1435685df59d" containerName="mariadb-account-create-update" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.453999 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa6524b-2b3f-47c3-b55f-1435685df59d" containerName="mariadb-account-create-update" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.454522 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa6524b-2b3f-47c3-b55f-1435685df59d" containerName="mariadb-account-create-update" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.455784 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.474789 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w5qdk"] Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.865639 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb354e85-311d-40bb-ae4a-5c535d4d89b9-operator-scripts\") pod \"cinder-db-create-w5qdk\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.874523 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65k4\" (UniqueName: \"kubernetes.io/projected/eb354e85-311d-40bb-ae4a-5c535d4d89b9-kube-api-access-q65k4\") pod \"cinder-db-create-w5qdk\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.959972 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0d11-account-create-update-jf72z"] Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.962215 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.974491 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0d11-account-create-update-jf72z"] Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.977588 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q65k4\" (UniqueName: \"kubernetes.io/projected/eb354e85-311d-40bb-ae4a-5c535d4d89b9-kube-api-access-q65k4\") pod \"cinder-db-create-w5qdk\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.977875 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwcxc\" (UniqueName: \"kubernetes.io/projected/f9ee15e8-6695-454f-83ad-d54176458497-kube-api-access-mwcxc\") pod \"cinder-0d11-account-create-update-jf72z\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.978029 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ee15e8-6695-454f-83ad-d54176458497-operator-scripts\") pod \"cinder-0d11-account-create-update-jf72z\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.978149 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb354e85-311d-40bb-ae4a-5c535d4d89b9-operator-scripts\") pod \"cinder-db-create-w5qdk\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.980229 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb354e85-311d-40bb-ae4a-5c535d4d89b9-operator-scripts\") pod \"cinder-db-create-w5qdk\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:08 crc kubenswrapper[4836]: I0217 14:27:08.984035 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.080493 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwcxc\" (UniqueName: \"kubernetes.io/projected/f9ee15e8-6695-454f-83ad-d54176458497-kube-api-access-mwcxc\") pod \"cinder-0d11-account-create-update-jf72z\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.080596 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ee15e8-6695-454f-83ad-d54176458497-operator-scripts\") pod \"cinder-0d11-account-create-update-jf72z\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.081773 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ee15e8-6695-454f-83ad-d54176458497-operator-scripts\") pod \"cinder-0d11-account-create-update-jf72z\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.264337 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwcxc\" (UniqueName: \"kubernetes.io/projected/f9ee15e8-6695-454f-83ad-d54176458497-kube-api-access-mwcxc\") pod 
\"cinder-0d11-account-create-update-jf72z\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.264402 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q65k4\" (UniqueName: \"kubernetes.io/projected/eb354e85-311d-40bb-ae4a-5c535d4d89b9-kube-api-access-q65k4\") pod \"cinder-db-create-w5qdk\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.292581 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.406211 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.457040 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-2ea0-account-create-update-p7p99"] Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.459322 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.464273 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.479009 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-jjrp2"] Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.481025 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.493612 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-operator-scripts\") pod \"cloudkitty-db-create-jjrp2\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.494154 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ee1a0f2-86df-4f97-957a-22bbd7da4505-operator-scripts\") pod \"cloudkitty-2ea0-account-create-update-p7p99\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.494482 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zfw7\" (UniqueName: \"kubernetes.io/projected/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-kube-api-access-6zfw7\") pod \"cloudkitty-db-create-jjrp2\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.504262 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7d6c\" (UniqueName: \"kubernetes.io/projected/2ee1a0f2-86df-4f97-957a-22bbd7da4505-kube-api-access-j7d6c\") pod \"cloudkitty-2ea0-account-create-update-p7p99\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.498934 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-jjrp2"] Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 
14:27:09.610397 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-69hk6"] Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.610483 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zfw7\" (UniqueName: \"kubernetes.io/projected/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-kube-api-access-6zfw7\") pod \"cloudkitty-db-create-jjrp2\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.611158 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7d6c\" (UniqueName: \"kubernetes.io/projected/2ee1a0f2-86df-4f97-957a-22bbd7da4505-kube-api-access-j7d6c\") pod \"cloudkitty-2ea0-account-create-update-p7p99\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.611491 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-operator-scripts\") pod \"cloudkitty-db-create-jjrp2\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.611534 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ee1a0f2-86df-4f97-957a-22bbd7da4505-operator-scripts\") pod \"cloudkitty-2ea0-account-create-update-p7p99\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.613254 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ee1a0f2-86df-4f97-957a-22bbd7da4505-operator-scripts\") pod \"cloudkitty-2ea0-account-create-update-p7p99\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.613247 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-operator-scripts\") pod \"cloudkitty-db-create-jjrp2\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:09 crc kubenswrapper[4836]: I0217 14:27:09.613498 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.042973 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-operator-scripts\") pod \"barbican-db-create-69hk6\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.045119 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6jm\" (UniqueName: \"kubernetes.io/projected/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-kube-api-access-zm6jm\") pod \"barbican-db-create-69hk6\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.051142 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-2ea0-account-create-update-p7p99"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.071330 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-q25rr"] Feb 17 14:27:10 crc kubenswrapper[4836]: 
I0217 14:27:10.073153 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.088095 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-69hk6"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.103079 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-q25rr"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.103195 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-652c-account-create-update-lswdv"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.105023 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.105324 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s87v5" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.105492 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.107776 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.108965 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.111529 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7d6c\" (UniqueName: \"kubernetes.io/projected/2ee1a0f2-86df-4f97-957a-22bbd7da4505-kube-api-access-j7d6c\") pod \"cloudkitty-2ea0-account-create-update-p7p99\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.123066 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.159923 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-combined-ca-bundle\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.160030 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-operator-scripts\") pod \"barbican-db-create-69hk6\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.160095 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqspw\" (UniqueName: \"kubernetes.io/projected/767841a7-db94-430a-b408-10e5bd0350e5-kube-api-access-bqspw\") pod \"barbican-652c-account-create-update-lswdv\" (UID: 
\"767841a7-db94-430a-b408-10e5bd0350e5\") " pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.160171 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njnhf\" (UniqueName: \"kubernetes.io/projected/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-kube-api-access-njnhf\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.160220 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6jm\" (UniqueName: \"kubernetes.io/projected/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-kube-api-access-zm6jm\") pod \"barbican-db-create-69hk6\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.160265 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767841a7-db94-430a-b408-10e5bd0350e5-operator-scripts\") pod \"barbican-652c-account-create-update-lswdv\" (UID: \"767841a7-db94-430a-b408-10e5bd0350e5\") " pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.160321 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-config-data\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.161408 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-operator-scripts\") pod 
\"barbican-db-create-69hk6\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.164594 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-652c-account-create-update-lswdv"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.197552 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zfw7\" (UniqueName: \"kubernetes.io/projected/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-kube-api-access-6zfw7\") pod \"cloudkitty-db-create-jjrp2\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.209337 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6jm\" (UniqueName: \"kubernetes.io/projected/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-kube-api-access-zm6jm\") pod \"barbican-db-create-69hk6\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.234453 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nwjd8"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.237196 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.253601 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nwjd8"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.264669 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-operator-scripts\") pod \"neutron-db-create-nwjd8\" (UID: \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.264951 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767841a7-db94-430a-b408-10e5bd0350e5-operator-scripts\") pod \"barbican-652c-account-create-update-lswdv\" (UID: \"767841a7-db94-430a-b408-10e5bd0350e5\") " pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.265092 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-config-data\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.265233 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-combined-ca-bundle\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.265448 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2xc5\" (UniqueName: 
\"kubernetes.io/projected/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-kube-api-access-h2xc5\") pod \"neutron-db-create-nwjd8\" (UID: \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.265588 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqspw\" (UniqueName: \"kubernetes.io/projected/767841a7-db94-430a-b408-10e5bd0350e5-kube-api-access-bqspw\") pod \"barbican-652c-account-create-update-lswdv\" (UID: \"767841a7-db94-430a-b408-10e5bd0350e5\") " pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.265829 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njnhf\" (UniqueName: \"kubernetes.io/projected/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-kube-api-access-njnhf\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.266337 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767841a7-db94-430a-b408-10e5bd0350e5-operator-scripts\") pod \"barbican-652c-account-create-update-lswdv\" (UID: \"767841a7-db94-430a-b408-10e5bd0350e5\") " pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.270759 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-combined-ca-bundle\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.289882 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-14cb-account-create-update-xw2dd"] Feb 17 
14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.292193 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.300759 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-config-data\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.301141 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.304570 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njnhf\" (UniqueName: \"kubernetes.io/projected/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-kube-api-access-njnhf\") pod \"keystone-db-sync-q25rr\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.308205 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqspw\" (UniqueName: \"kubernetes.io/projected/767841a7-db94-430a-b408-10e5bd0350e5-kube-api-access-bqspw\") pod \"barbican-652c-account-create-update-lswdv\" (UID: \"767841a7-db94-430a-b408-10e5bd0350e5\") " pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.322533 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-14cb-account-create-update-xw2dd"] Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.368068 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-operator-scripts\") pod \"neutron-db-create-nwjd8\" (UID: 
\"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.368167 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpsh\" (UniqueName: \"kubernetes.io/projected/623225aa-2492-494e-be5b-92acef6f23cf-kube-api-access-sxpsh\") pod \"neutron-14cb-account-create-update-xw2dd\" (UID: \"623225aa-2492-494e-be5b-92acef6f23cf\") " pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.368264 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2xc5\" (UniqueName: \"kubernetes.io/projected/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-kube-api-access-h2xc5\") pod \"neutron-db-create-nwjd8\" (UID: \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.368328 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623225aa-2492-494e-be5b-92acef6f23cf-operator-scripts\") pod \"neutron-14cb-account-create-update-xw2dd\" (UID: \"623225aa-2492-494e-be5b-92acef6f23cf\") " pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.369402 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-operator-scripts\") pod \"neutron-db-create-nwjd8\" (UID: \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.373676 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.387923 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.393581 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2xc5\" (UniqueName: \"kubernetes.io/projected/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-kube-api-access-h2xc5\") pod \"neutron-db-create-nwjd8\" (UID: \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.439367 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.470732 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623225aa-2492-494e-be5b-92acef6f23cf-operator-scripts\") pod \"neutron-14cb-account-create-update-xw2dd\" (UID: \"623225aa-2492-494e-be5b-92acef6f23cf\") " pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.470904 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxpsh\" (UniqueName: \"kubernetes.io/projected/623225aa-2492-494e-be5b-92acef6f23cf-kube-api-access-sxpsh\") pod \"neutron-14cb-account-create-update-xw2dd\" (UID: \"623225aa-2492-494e-be5b-92acef6f23cf\") " pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.471979 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623225aa-2492-494e-be5b-92acef6f23cf-operator-scripts\") pod \"neutron-14cb-account-create-update-xw2dd\" (UID: 
\"623225aa-2492-494e-be5b-92acef6f23cf\") " pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.476143 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.114:9090/-/ready\": dial tcp 10.217.0.114:9090: connect: connection refused" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.492382 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxpsh\" (UniqueName: \"kubernetes.io/projected/623225aa-2492-494e-be5b-92acef6f23cf-kube-api-access-sxpsh\") pod \"neutron-14cb-account-create-update-xw2dd\" (UID: \"623225aa-2492-494e-be5b-92acef6f23cf\") " pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.494608 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.551800 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.601634 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:10 crc kubenswrapper[4836]: I0217 14:27:10.703237 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:15 crc kubenswrapper[4836]: I0217 14:27:15.472655 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.114:9090/-/ready\": dial tcp 10.217.0.114:9090: connect: connection refused" Feb 17 14:27:15 crc kubenswrapper[4836]: I0217 14:27:15.473295 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:16 crc kubenswrapper[4836]: E0217 14:27:16.760531 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 17 14:27:16 crc kubenswrapper[4836]: E0217 14:27:16.760854 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grffb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-z8g7x_openstack(df3a6cf1-bca0-45b2-9f7c-6d483452d49d): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 17 14:27:16 crc kubenswrapper[4836]: E0217 14:27:16.762059 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-z8g7x" podUID="df3a6cf1-bca0-45b2-9f7c-6d483452d49d" Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.835362 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.963131 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz7ht\" (UniqueName: \"kubernetes.io/projected/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-kube-api-access-xz7ht\") pod \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.963374 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-log-ovn\") pod \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.963442 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run-ovn\") pod \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.963585 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run\") pod \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\" (UID: 
\"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.963676 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-scripts\") pod \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.963730 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-additional-scripts\") pod \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\" (UID: \"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec\") " Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.964522 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" (UID: "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.964582 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run" (OuterVolumeSpecName: "var-run") pod "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" (UID: "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.964796 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" (UID: "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.965872 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-scripts" (OuterVolumeSpecName: "scripts") pod "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" (UID: "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.966815 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" (UID: "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:16 crc kubenswrapper[4836]: I0217 14:27:16.970624 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-kube-api-access-xz7ht" (OuterVolumeSpecName: "kube-api-access-xz7ht") pod "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" (UID: "fb9f3dbb-ea37-4057-97c1-b93cbb39aaec"). InnerVolumeSpecName "kube-api-access-xz7ht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.171823 4836 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.171879 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.172862 4836 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.172886 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz7ht\" (UniqueName: \"kubernetes.io/projected/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-kube-api-access-xz7ht\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.172897 4836 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.172909 4836 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.375464 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ghk5k-config-tv4tb" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.375604 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ghk5k-config-tv4tb" event={"ID":"fb9f3dbb-ea37-4057-97c1-b93cbb39aaec","Type":"ContainerDied","Data":"c2bde1d5ba9adc61b0950993bd532e188bd3fc6a817496d9051110a296f9c5b2"} Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.376370 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2bde1d5ba9adc61b0950993bd532e188bd3fc6a817496d9051110a296f9c5b2" Feb 17 14:27:17 crc kubenswrapper[4836]: E0217 14:27:17.376483 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-z8g7x" podUID="df3a6cf1-bca0-45b2-9f7c-6d483452d49d" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.377101 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424441 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-thanos-prometheus-http-client-file\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424639 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424698 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-tls-assets\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424728 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8z8l\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-kube-api-access-t8z8l\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424768 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-web-config\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424796 4836 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-2\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424821 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config-out\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424859 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424910 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-0\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.424964 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-1\") pod \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\" (UID: \"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0\") " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.426041 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.426082 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.427600 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.435244 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config-out" (OuterVolumeSpecName: "config-out") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.435339 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.435455 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.445619 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config" (OuterVolumeSpecName: "config") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.451399 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-kube-api-access-t8z8l" (OuterVolumeSpecName: "kube-api-access-t8z8l") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "kube-api-access-t8z8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.487989 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "pvc-93f26e02-6577-44e5-880e-5ede6b185735". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.498230 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-web-config" (OuterVolumeSpecName: "web-config") pod "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" (UID: "e29295a6-4250-4a7d-84c5-a8b3ddf96bd0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.526881 4836 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527471 4836 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") on node \"crc\" " Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527500 4836 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527517 4836 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-t8z8l\" (UniqueName: \"kubernetes.io/projected/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-kube-api-access-t8z8l\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527533 4836 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527545 4836 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527557 4836 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527573 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527585 4836 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.527597 4836 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.553681 4836 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.553981 4836 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-93f26e02-6577-44e5-880e-5ede6b185735" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735") on node "crc" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.632553 4836 reconciler_common.go:293] "Volume detached for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:17 crc kubenswrapper[4836]: E0217 14:27:17.815743 4836 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb9f3dbb_ea37_4057_97c1_b93cbb39aaec.slice/crio-c2bde1d5ba9adc61b0950993bd532e188bd3fc6a817496d9051110a296f9c5b2\": RecentStats: unable to find data in memory cache]" Feb 17 14:27:17 crc kubenswrapper[4836]: I0217 14:27:17.838687 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.004775 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ghk5k-config-tv4tb"] Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.341513 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ghk5k-config-tv4tb"] Feb 17 14:27:18 crc kubenswrapper[4836]: W0217 14:27:18.414474 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4ce1c7a_57e8_491e_84ab_8aed8baea37b.slice/crio-c2e28b74f07f40dbbfa11b857fc086ce56671a9ec1d5525b52424bc04b43fc61 WatchSource:0}: Error finding container 
c2e28b74f07f40dbbfa11b857fc086ce56671a9ec1d5525b52424bc04b43fc61: Status 404 returned error can't find the container with id c2e28b74f07f40dbbfa11b857fc086ce56671a9ec1d5525b52424bc04b43fc61 Feb 17 14:27:18 crc kubenswrapper[4836]: W0217 14:27:18.418863 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod623225aa_2492_494e_be5b_92acef6f23cf.slice/crio-f1efd21934a9bbf27db4f3919668653a876c21cef3f9f7e29e0c0ecf147efc4b WatchSource:0}: Error finding container f1efd21934a9bbf27db4f3919668653a876c21cef3f9f7e29e0c0ecf147efc4b: Status 404 returned error can't find the container with id f1efd21934a9bbf27db4f3919668653a876c21cef3f9f7e29e0c0ecf147efc4b Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.423056 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-14cb-account-create-update-xw2dd"] Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.425278 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e29295a6-4250-4a7d-84c5-a8b3ddf96bd0","Type":"ContainerDied","Data":"cefd70541e5e6c57648aaec13bc3ac8008ad32d2cca2fd2d95d8a18012223fb3"} Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.425380 4836 scope.go:117] "RemoveContainer" containerID="839af704fe28aeff5f1ab20ca6e7c7a0fb25790fc5bc232fe9131c132f8e0bf9" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.425600 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.439744 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.445181 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nwjd8"] Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.549469 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.607580 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" path="/var/lib/kubelet/pods/fb9f3dbb-ea37-4057-97c1-b93cbb39aaec/volumes" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.610089 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.610149 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:27:18 crc kubenswrapper[4836]: E0217 14:27:18.612479 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="config-reloader" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.612516 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="config-reloader" Feb 17 14:27:18 crc kubenswrapper[4836]: E0217 14:27:18.612538 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="prometheus" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.612546 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="prometheus" Feb 17 14:27:18 crc kubenswrapper[4836]: E0217 14:27:18.612574 4836 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" containerName="ovn-config" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.612583 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" containerName="ovn-config" Feb 17 14:27:18 crc kubenswrapper[4836]: E0217 14:27:18.612759 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="init-config-reloader" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.612777 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="init-config-reloader" Feb 17 14:27:18 crc kubenswrapper[4836]: E0217 14:27:18.612792 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="thanos-sidecar" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.612801 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="thanos-sidecar" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.613171 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="config-reloader" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.613198 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="thanos-sidecar" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.613222 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" containerName="prometheus" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.613238 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9f3dbb-ea37-4057-97c1-b93cbb39aaec" containerName="ovn-config" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.613354 4836 scope.go:117] 
"RemoveContainer" containerID="2634435ab0e106f5ce9041eacdd8794376187c382228fa8d9f52a71bd9ec4553" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.616326 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.627914 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.627922 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.628512 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.628559 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.628804 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-x7d2x" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.629043 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.629185 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.630651 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.633152 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 
14:27:18.634043 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.643654 4836 scope.go:117] "RemoveContainer" containerID="5de26698cc194f27aa6fa46281e03b3fa0bc2faa6bf0ef9b745f3fec33e05835" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.734705 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6fec8667-7189-4e29-8362-37dd935d2db7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.736490 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6fec8667-7189-4e29-8362-37dd935d2db7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.736611 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.736664 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc 
kubenswrapper[4836]: I0217 14:27:18.736731 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvdd\" (UniqueName: \"kubernetes.io/projected/6fec8667-7189-4e29-8362-37dd935d2db7-kube-api-access-lmvdd\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.736758 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.736969 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.737032 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.737058 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.737154 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-config\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.737718 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.737793 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.737834 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.754452 4836 scope.go:117] "RemoveContainer" containerID="1aeb38549c5093ddcbd19fe025e8df306afcc08ba355a33bcd16537686f0d989" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.839777 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.839846 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.839950 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6fec8667-7189-4e29-8362-37dd935d2db7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840002 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6fec8667-7189-4e29-8362-37dd935d2db7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840043 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840077 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840118 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvdd\" (UniqueName: \"kubernetes.io/projected/6fec8667-7189-4e29-8362-37dd935d2db7-kube-api-access-lmvdd\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840146 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840207 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840235 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840277 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840341 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-config\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.840360 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.845352 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.845752 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.847029 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6fec8667-7189-4e29-8362-37dd935d2db7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.849030 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.850983 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.853431 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.855030 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6fec8667-7189-4e29-8362-37dd935d2db7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.856133 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.857379 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.857492 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94da064c7e93eda9403c837c8900dc0ec43041d0305170815d7b87148c388206/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.859520 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6fec8667-7189-4e29-8362-37dd935d2db7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.864248 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.872865 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fec8667-7189-4e29-8362-37dd935d2db7-config\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.889511 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvdd\" (UniqueName: 
\"kubernetes.io/projected/6fec8667-7189-4e29-8362-37dd935d2db7-kube-api-access-lmvdd\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.922467 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 14:27:18 crc kubenswrapper[4836]: W0217 14:27:18.932076 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ee15e8_6695_454f_83ad_d54176458497.slice/crio-f0bdbf6a269741552be971089824b352c183b18e469724857214ea40b421f6ba WatchSource:0}: Error finding container f0bdbf6a269741552be971089824b352c183b18e469724857214ea40b421f6ba: Status 404 returned error can't find the container with id f0bdbf6a269741552be971089824b352c183b18e469724857214ea40b421f6ba Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.932433 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-2ea0-account-create-update-p7p99"] Feb 17 14:27:18 crc kubenswrapper[4836]: I0217 14:27:18.978397 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-93f26e02-6577-44e5-880e-5ede6b185735\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93f26e02-6577-44e5-880e-5ede6b185735\") pod \"prometheus-metric-storage-0\" (UID: \"6fec8667-7189-4e29-8362-37dd935d2db7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:18 crc kubenswrapper[4836]: W0217 14:27:18.997092 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1fe36f3_d6b6_44e0_b85b_6def754fd08e.slice/crio-17df418fd624b8478184e0378c99426f8f65d89a01b5f290b6361a3a1e8ae4b1 WatchSource:0}: Error finding container 17df418fd624b8478184e0378c99426f8f65d89a01b5f290b6361a3a1e8ae4b1: Status 404 returned error can't find the container with id 
17df418fd624b8478184e0378c99426f8f65d89a01b5f290b6361a3a1e8ae4b1 Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.013132 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.013397 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.017608 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-652c-account-create-update-lswdv"] Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.030801 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 14:27:19 crc kubenswrapper[4836]: W0217 14:27:19.031963 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a1d4ef8_03d9_42d8_ae0b_9410767ed25f.slice/crio-d0ca56828b8b775526488a0852f84f24a4af21e12c68a30c67a55c19e97b65de WatchSource:0}: Error finding container d0ca56828b8b775526488a0852f84f24a4af21e12c68a30c67a55c19e97b65de: Status 404 returned error can't find the container with id d0ca56828b8b775526488a0852f84f24a4af21e12c68a30c67a55c19e97b65de Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.069756 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0d11-account-create-update-jf72z"] Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.110639 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-jjrp2"] Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.124838 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-69hk6"] Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.144163 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-q25rr"] Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 
14:27:19.154365 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w5qdk"] Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.212078 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 14:27:19 crc kubenswrapper[4836]: W0217 14:27:19.317180 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode482046c_502a_4f41_b013_7b3ef1c71ee1.slice/crio-bdd974c7e983ba9188d62382b19b6a16428ca529e476c2ae048d286d85f2cce3 WatchSource:0}: Error finding container bdd974c7e983ba9188d62382b19b6a16428ca529e476c2ae048d286d85f2cce3: Status 404 returned error can't find the container with id bdd974c7e983ba9188d62382b19b6a16428ca529e476c2ae048d286d85f2cce3 Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.651625 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" event={"ID":"2ee1a0f2-86df-4f97-957a-22bbd7da4505","Type":"ContainerStarted","Data":"fbea486a2a4fe13c5e5757c175c46cee2b4c46bf0588a5aa9f5c9b50a17e6502"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.663747 4836 generic.go:334] "Generic (PLEG): container finished" podID="623225aa-2492-494e-be5b-92acef6f23cf" containerID="b3fd8198bda32089f8d16c7005023bc9355442a69582a28217b4faa19a58edfd" exitCode=0 Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.664024 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-14cb-account-create-update-xw2dd" event={"ID":"623225aa-2492-494e-be5b-92acef6f23cf","Type":"ContainerDied","Data":"b3fd8198bda32089f8d16c7005023bc9355442a69582a28217b4faa19a58edfd"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.664115 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-14cb-account-create-update-xw2dd" 
event={"ID":"623225aa-2492-494e-be5b-92acef6f23cf","Type":"ContainerStarted","Data":"f1efd21934a9bbf27db4f3919668653a876c21cef3f9f7e29e0c0ecf147efc4b"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.689776 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"bdd974c7e983ba9188d62382b19b6a16428ca529e476c2ae048d286d85f2cce3"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.702039 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-jjrp2" event={"ID":"a1fe36f3-d6b6-44e0-b85b-6def754fd08e","Type":"ContainerStarted","Data":"17df418fd624b8478184e0378c99426f8f65d89a01b5f290b6361a3a1e8ae4b1"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.707762 4836 generic.go:334] "Generic (PLEG): container finished" podID="d4ce1c7a-57e8-491e-84ab-8aed8baea37b" containerID="7e6f04d96e5a077df5020259f367870723b0f91e790c0b81e936bf2cbc3790f9" exitCode=0 Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.707864 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nwjd8" event={"ID":"d4ce1c7a-57e8-491e-84ab-8aed8baea37b","Type":"ContainerDied","Data":"7e6f04d96e5a077df5020259f367870723b0f91e790c0b81e936bf2cbc3790f9"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.707906 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nwjd8" event={"ID":"d4ce1c7a-57e8-491e-84ab-8aed8baea37b","Type":"ContainerStarted","Data":"c2e28b74f07f40dbbfa11b857fc086ce56671a9ec1d5525b52424bc04b43fc61"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.739657 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q25rr" event={"ID":"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f","Type":"ContainerStarted","Data":"d0ca56828b8b775526488a0852f84f24a4af21e12c68a30c67a55c19e97b65de"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 
14:27:19.759763 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d11-account-create-update-jf72z" event={"ID":"f9ee15e8-6695-454f-83ad-d54176458497","Type":"ContainerStarted","Data":"f0bdbf6a269741552be971089824b352c183b18e469724857214ea40b421f6ba"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.785160 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w5qdk" event={"ID":"eb354e85-311d-40bb-ae4a-5c535d4d89b9","Type":"ContainerStarted","Data":"e496c31a00d09e5ed74cd58d9920320d6ca8639e2eda3e165b272a2eff9d6bd6"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.786956 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-69hk6" event={"ID":"4edeb89f-0bd9-466e-a9f9-2d45575d2c72","Type":"ContainerStarted","Data":"d3d6bb45c56fb523eb76b17cc800f28e1531da5278e29bfe6d07de89f3199e47"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.804148 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-652c-account-create-update-lswdv" event={"ID":"767841a7-db94-430a-b408-10e5bd0350e5","Type":"ContainerStarted","Data":"7e8ac5cbf4b170d941ee6315c12ea589e4cbab28df2a33a941f9c1feb21af48e"} Feb 17 14:27:19 crc kubenswrapper[4836]: I0217 14:27:19.816241 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.589686 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29295a6-4250-4a7d-84c5-a8b3ddf96bd0" path="/var/lib/kubelet/pods/e29295a6-4250-4a7d-84c5-a8b3ddf96bd0/volumes" Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.836490 4836 generic.go:334] "Generic (PLEG): container finished" podID="4edeb89f-0bd9-466e-a9f9-2d45575d2c72" containerID="3ae7c112e0518db5ada6508ad8c57217e914b3d3401ff927d4aa18b2e2dd9f79" exitCode=0 Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.837811 4836 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-create-69hk6" event={"ID":"4edeb89f-0bd9-466e-a9f9-2d45575d2c72","Type":"ContainerDied","Data":"3ae7c112e0518db5ada6508ad8c57217e914b3d3401ff927d4aa18b2e2dd9f79"} Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.843545 4836 generic.go:334] "Generic (PLEG): container finished" podID="a1fe36f3-d6b6-44e0-b85b-6def754fd08e" containerID="86d009aabc2aafe94768037f28b03b96d85141a639669b82cdbd2fa653d9696d" exitCode=0 Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.843677 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-jjrp2" event={"ID":"a1fe36f3-d6b6-44e0-b85b-6def754fd08e","Type":"ContainerDied","Data":"86d009aabc2aafe94768037f28b03b96d85141a639669b82cdbd2fa653d9696d"} Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.846832 4836 generic.go:334] "Generic (PLEG): container finished" podID="767841a7-db94-430a-b408-10e5bd0350e5" containerID="5e36e16a50074efc0038c12585afeefa45bc968423f053fecc01a7a460fc9fd3" exitCode=0 Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.847039 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-652c-account-create-update-lswdv" event={"ID":"767841a7-db94-430a-b408-10e5bd0350e5","Type":"ContainerDied","Data":"5e36e16a50074efc0038c12585afeefa45bc968423f053fecc01a7a460fc9fd3"} Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.850020 4836 generic.go:334] "Generic (PLEG): container finished" podID="2ee1a0f2-86df-4f97-957a-22bbd7da4505" containerID="e3b5cb6d26fdb2e586683ff31b8abe63df8d533a376c42dd280747ab5e165f5e" exitCode=0 Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.850115 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" event={"ID":"2ee1a0f2-86df-4f97-957a-22bbd7da4505","Type":"ContainerDied","Data":"e3b5cb6d26fdb2e586683ff31b8abe63df8d533a376c42dd280747ab5e165f5e"} Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.852414 
4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6fec8667-7189-4e29-8362-37dd935d2db7","Type":"ContainerStarted","Data":"bb66f128cce682cdff9affc9b52d41d1a5e4fb7196fde6c011efbd2fe8f4b847"} Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.863257 4836 generic.go:334] "Generic (PLEG): container finished" podID="f9ee15e8-6695-454f-83ad-d54176458497" containerID="7f08e0024064e8fd1c473afb57d745eb10366b72696b8824621db71657c54472" exitCode=0 Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.863557 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d11-account-create-update-jf72z" event={"ID":"f9ee15e8-6695-454f-83ad-d54176458497","Type":"ContainerDied","Data":"7f08e0024064e8fd1c473afb57d745eb10366b72696b8824621db71657c54472"} Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.877527 4836 generic.go:334] "Generic (PLEG): container finished" podID="eb354e85-311d-40bb-ae4a-5c535d4d89b9" containerID="0112cdba6fc4f4acf8102f48cb77deaeb49a0b5c8b49e3c6adcdb559d7e100b6" exitCode=0 Feb 17 14:27:20 crc kubenswrapper[4836]: I0217 14:27:20.877843 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w5qdk" event={"ID":"eb354e85-311d-40bb-ae4a-5c535d4d89b9","Type":"ContainerDied","Data":"0112cdba6fc4f4acf8102f48cb77deaeb49a0b5c8b49e3c6adcdb559d7e100b6"} Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.521362 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.534215 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.624729 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxpsh\" (UniqueName: \"kubernetes.io/projected/623225aa-2492-494e-be5b-92acef6f23cf-kube-api-access-sxpsh\") pod \"623225aa-2492-494e-be5b-92acef6f23cf\" (UID: \"623225aa-2492-494e-be5b-92acef6f23cf\") " Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.625013 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623225aa-2492-494e-be5b-92acef6f23cf-operator-scripts\") pod \"623225aa-2492-494e-be5b-92acef6f23cf\" (UID: \"623225aa-2492-494e-be5b-92acef6f23cf\") " Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.625650 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623225aa-2492-494e-be5b-92acef6f23cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "623225aa-2492-494e-be5b-92acef6f23cf" (UID: "623225aa-2492-494e-be5b-92acef6f23cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.626046 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623225aa-2492-494e-be5b-92acef6f23cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.630684 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623225aa-2492-494e-be5b-92acef6f23cf-kube-api-access-sxpsh" (OuterVolumeSpecName: "kube-api-access-sxpsh") pod "623225aa-2492-494e-be5b-92acef6f23cf" (UID: "623225aa-2492-494e-be5b-92acef6f23cf"). InnerVolumeSpecName "kube-api-access-sxpsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.727535 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-operator-scripts\") pod \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\" (UID: \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.727680 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2xc5\" (UniqueName: \"kubernetes.io/projected/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-kube-api-access-h2xc5\") pod \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\" (UID: \"d4ce1c7a-57e8-491e-84ab-8aed8baea37b\") " Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.728150 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxpsh\" (UniqueName: \"kubernetes.io/projected/623225aa-2492-494e-be5b-92acef6f23cf-kube-api-access-sxpsh\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.731512 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4ce1c7a-57e8-491e-84ab-8aed8baea37b" (UID: "d4ce1c7a-57e8-491e-84ab-8aed8baea37b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.734078 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-kube-api-access-h2xc5" (OuterVolumeSpecName: "kube-api-access-h2xc5") pod "d4ce1c7a-57e8-491e-84ab-8aed8baea37b" (UID: "d4ce1c7a-57e8-491e-84ab-8aed8baea37b"). InnerVolumeSpecName "kube-api-access-h2xc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.830995 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.831055 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2xc5\" (UniqueName: \"kubernetes.io/projected/d4ce1c7a-57e8-491e-84ab-8aed8baea37b-kube-api-access-h2xc5\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.893896 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nwjd8" event={"ID":"d4ce1c7a-57e8-491e-84ab-8aed8baea37b","Type":"ContainerDied","Data":"c2e28b74f07f40dbbfa11b857fc086ce56671a9ec1d5525b52424bc04b43fc61"} Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.893952 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2e28b74f07f40dbbfa11b857fc086ce56671a9ec1d5525b52424bc04b43fc61" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.894025 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nwjd8" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.895558 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-14cb-account-create-update-xw2dd" event={"ID":"623225aa-2492-494e-be5b-92acef6f23cf","Type":"ContainerDied","Data":"f1efd21934a9bbf27db4f3919668653a876c21cef3f9f7e29e0c0ecf147efc4b"} Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.895583 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1efd21934a9bbf27db4f3919668653a876c21cef3f9f7e29e0c0ecf147efc4b" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.895659 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-14cb-account-create-update-xw2dd" Feb 17 14:27:21 crc kubenswrapper[4836]: I0217 14:27:21.904325 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"a711cf2d892c351da83705eaa0cc64eb3b3425e6beec5a7dae46e099c405eacd"} Feb 17 14:27:23 crc kubenswrapper[4836]: I0217 14:27:23.929724 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"6589c0845f92c6a5bdd2d7f0de1decf41fd63691cfcde22131a6bea30b14f06a"} Feb 17 14:27:24 crc kubenswrapper[4836]: I0217 14:27:24.955495 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6fec8667-7189-4e29-8362-37dd935d2db7","Type":"ContainerStarted","Data":"a82e37c7eb14ee548654e466a1de02d0ef7f18f1bf7fd37d772effc7cc961f91"} Feb 17 14:27:27 crc kubenswrapper[4836]: I0217 14:27:27.992781 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-jjrp2" event={"ID":"a1fe36f3-d6b6-44e0-b85b-6def754fd08e","Type":"ContainerDied","Data":"17df418fd624b8478184e0378c99426f8f65d89a01b5f290b6361a3a1e8ae4b1"} Feb 17 14:27:27 crc kubenswrapper[4836]: I0217 14:27:27.993791 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17df418fd624b8478184e0378c99426f8f65d89a01b5f290b6361a3a1e8ae4b1" Feb 17 14:27:27 crc kubenswrapper[4836]: I0217 14:27:27.995714 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-652c-account-create-update-lswdv" event={"ID":"767841a7-db94-430a-b408-10e5bd0350e5","Type":"ContainerDied","Data":"7e8ac5cbf4b170d941ee6315c12ea589e4cbab28df2a33a941f9c1feb21af48e"} Feb 17 14:27:27 crc kubenswrapper[4836]: I0217 14:27:27.995746 4836 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7e8ac5cbf4b170d941ee6315c12ea589e4cbab28df2a33a941f9c1feb21af48e" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.000006 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" event={"ID":"2ee1a0f2-86df-4f97-957a-22bbd7da4505","Type":"ContainerDied","Data":"fbea486a2a4fe13c5e5757c175c46cee2b4c46bf0588a5aa9f5c9b50a17e6502"} Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.000038 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbea486a2a4fe13c5e5757c175c46cee2b4c46bf0588a5aa9f5c9b50a17e6502" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.001478 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d11-account-create-update-jf72z" event={"ID":"f9ee15e8-6695-454f-83ad-d54176458497","Type":"ContainerDied","Data":"f0bdbf6a269741552be971089824b352c183b18e469724857214ea40b421f6ba"} Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.001509 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0bdbf6a269741552be971089824b352c183b18e469724857214ea40b421f6ba" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.003332 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w5qdk" event={"ID":"eb354e85-311d-40bb-ae4a-5c535d4d89b9","Type":"ContainerDied","Data":"e496c31a00d09e5ed74cd58d9920320d6ca8639e2eda3e165b272a2eff9d6bd6"} Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.003367 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e496c31a00d09e5ed74cd58d9920320d6ca8639e2eda3e165b272a2eff9d6bd6" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.005018 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-69hk6" event={"ID":"4edeb89f-0bd9-466e-a9f9-2d45575d2c72","Type":"ContainerDied","Data":"d3d6bb45c56fb523eb76b17cc800f28e1531da5278e29bfe6d07de89f3199e47"} 
Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.005046 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d6bb45c56fb523eb76b17cc800f28e1531da5278e29bfe6d07de89f3199e47" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.019584 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.051696 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.066377 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.070274 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm6jm\" (UniqueName: \"kubernetes.io/projected/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-kube-api-access-zm6jm\") pod \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.070358 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwcxc\" (UniqueName: \"kubernetes.io/projected/f9ee15e8-6695-454f-83ad-d54176458497-kube-api-access-mwcxc\") pod \"f9ee15e8-6695-454f-83ad-d54176458497\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.070453 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ee15e8-6695-454f-83ad-d54176458497-operator-scripts\") pod \"f9ee15e8-6695-454f-83ad-d54176458497\" (UID: \"f9ee15e8-6695-454f-83ad-d54176458497\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.070497 4836 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-operator-scripts\") pod \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\" (UID: \"4edeb89f-0bd9-466e-a9f9-2d45575d2c72\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.070547 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q65k4\" (UniqueName: \"kubernetes.io/projected/eb354e85-311d-40bb-ae4a-5c535d4d89b9-kube-api-access-q65k4\") pod \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.070749 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb354e85-311d-40bb-ae4a-5c535d4d89b9-operator-scripts\") pod \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\" (UID: \"eb354e85-311d-40bb-ae4a-5c535d4d89b9\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.072153 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb354e85-311d-40bb-ae4a-5c535d4d89b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb354e85-311d-40bb-ae4a-5c535d4d89b9" (UID: "eb354e85-311d-40bb-ae4a-5c535d4d89b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.073205 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ee15e8-6695-454f-83ad-d54176458497-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9ee15e8-6695-454f-83ad-d54176458497" (UID: "f9ee15e8-6695-454f-83ad-d54176458497"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.075945 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4edeb89f-0bd9-466e-a9f9-2d45575d2c72" (UID: "4edeb89f-0bd9-466e-a9f9-2d45575d2c72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.079267 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ee15e8-6695-454f-83ad-d54176458497-kube-api-access-mwcxc" (OuterVolumeSpecName: "kube-api-access-mwcxc") pod "f9ee15e8-6695-454f-83ad-d54176458497" (UID: "f9ee15e8-6695-454f-83ad-d54176458497"). InnerVolumeSpecName "kube-api-access-mwcxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.082120 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-kube-api-access-zm6jm" (OuterVolumeSpecName: "kube-api-access-zm6jm") pod "4edeb89f-0bd9-466e-a9f9-2d45575d2c72" (UID: "4edeb89f-0bd9-466e-a9f9-2d45575d2c72"). InnerVolumeSpecName "kube-api-access-zm6jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.090633 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb354e85-311d-40bb-ae4a-5c535d4d89b9-kube-api-access-q65k4" (OuterVolumeSpecName: "kube-api-access-q65k4") pod "eb354e85-311d-40bb-ae4a-5c535d4d89b9" (UID: "eb354e85-311d-40bb-ae4a-5c535d4d89b9"). InnerVolumeSpecName "kube-api-access-q65k4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.172484 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm6jm\" (UniqueName: \"kubernetes.io/projected/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-kube-api-access-zm6jm\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.172533 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwcxc\" (UniqueName: \"kubernetes.io/projected/f9ee15e8-6695-454f-83ad-d54176458497-kube-api-access-mwcxc\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.172546 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ee15e8-6695-454f-83ad-d54176458497-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.172559 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edeb89f-0bd9-466e-a9f9-2d45575d2c72-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.172569 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q65k4\" (UniqueName: \"kubernetes.io/projected/eb354e85-311d-40bb-ae4a-5c535d4d89b9-kube-api-access-q65k4\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.172578 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb354e85-311d-40bb-ae4a-5c535d4d89b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.173176 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.182603 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.200585 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.376040 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ee1a0f2-86df-4f97-957a-22bbd7da4505-operator-scripts\") pod \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.376657 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7d6c\" (UniqueName: \"kubernetes.io/projected/2ee1a0f2-86df-4f97-957a-22bbd7da4505-kube-api-access-j7d6c\") pod \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\" (UID: \"2ee1a0f2-86df-4f97-957a-22bbd7da4505\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.376762 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqspw\" (UniqueName: \"kubernetes.io/projected/767841a7-db94-430a-b408-10e5bd0350e5-kube-api-access-bqspw\") pod \"767841a7-db94-430a-b408-10e5bd0350e5\" (UID: \"767841a7-db94-430a-b408-10e5bd0350e5\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.376809 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-operator-scripts\") pod \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.376874 4836 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767841a7-db94-430a-b408-10e5bd0350e5-operator-scripts\") pod \"767841a7-db94-430a-b408-10e5bd0350e5\" (UID: \"767841a7-db94-430a-b408-10e5bd0350e5\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.376910 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zfw7\" (UniqueName: \"kubernetes.io/projected/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-kube-api-access-6zfw7\") pod \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\" (UID: \"a1fe36f3-d6b6-44e0-b85b-6def754fd08e\") " Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.379374 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767841a7-db94-430a-b408-10e5bd0350e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "767841a7-db94-430a-b408-10e5bd0350e5" (UID: "767841a7-db94-430a-b408-10e5bd0350e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.379783 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee1a0f2-86df-4f97-957a-22bbd7da4505-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ee1a0f2-86df-4f97-957a-22bbd7da4505" (UID: "2ee1a0f2-86df-4f97-957a-22bbd7da4505"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.379893 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1fe36f3-d6b6-44e0-b85b-6def754fd08e" (UID: "a1fe36f3-d6b6-44e0-b85b-6def754fd08e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.384152 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767841a7-db94-430a-b408-10e5bd0350e5-kube-api-access-bqspw" (OuterVolumeSpecName: "kube-api-access-bqspw") pod "767841a7-db94-430a-b408-10e5bd0350e5" (UID: "767841a7-db94-430a-b408-10e5bd0350e5"). InnerVolumeSpecName "kube-api-access-bqspw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.384199 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee1a0f2-86df-4f97-957a-22bbd7da4505-kube-api-access-j7d6c" (OuterVolumeSpecName: "kube-api-access-j7d6c") pod "2ee1a0f2-86df-4f97-957a-22bbd7da4505" (UID: "2ee1a0f2-86df-4f97-957a-22bbd7da4505"). InnerVolumeSpecName "kube-api-access-j7d6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.384218 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-kube-api-access-6zfw7" (OuterVolumeSpecName: "kube-api-access-6zfw7") pod "a1fe36f3-d6b6-44e0-b85b-6def754fd08e" (UID: "a1fe36f3-d6b6-44e0-b85b-6def754fd08e"). InnerVolumeSpecName "kube-api-access-6zfw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.481675 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqspw\" (UniqueName: \"kubernetes.io/projected/767841a7-db94-430a-b408-10e5bd0350e5-kube-api-access-bqspw\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.481720 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.481731 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/767841a7-db94-430a-b408-10e5bd0350e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.481741 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zfw7\" (UniqueName: \"kubernetes.io/projected/a1fe36f3-d6b6-44e0-b85b-6def754fd08e-kube-api-access-6zfw7\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.481751 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ee1a0f2-86df-4f97-957a-22bbd7da4505-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:28 crc kubenswrapper[4836]: I0217 14:27:28.481760 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7d6c\" (UniqueName: \"kubernetes.io/projected/2ee1a0f2-86df-4f97-957a-22bbd7da4505-kube-api-access-j7d6c\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.029771 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q25rr" 
event={"ID":"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f","Type":"ContainerStarted","Data":"515b55d1439f54ad3649999fcf112b0e86238d037ec2170a1978295a22c02429"} Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.038105 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d11-account-create-update-jf72z" Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.039434 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"10e8f1a8271fe8b9a3e079d4e3f9b1c9e1a94071cc3e1794381ca8b55232643b"} Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.040934 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"7f1925fc13f4afdcc4c45288037b0e54b58885bef6bfe0968e5743fdedd3eee5"} Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.040713 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2ea0-account-create-update-p7p99" Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.040778 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-69hk6" Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.040624 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-jjrp2" Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.040837 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w5qdk" Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.040831 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-652c-account-create-update-lswdv" Feb 17 14:27:29 crc kubenswrapper[4836]: I0217 14:27:29.071598 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-q25rr" podStartSLOduration=11.378752471 podStartE2EDuration="20.071535812s" podCreationTimestamp="2026-02-17 14:27:09 +0000 UTC" firstStartedPulling="2026-02-17 14:27:19.094410367 +0000 UTC m=+1265.437338636" lastFinishedPulling="2026-02-17 14:27:27.787193708 +0000 UTC m=+1274.130121977" observedRunningTime="2026-02-17 14:27:29.0544771 +0000 UTC m=+1275.397405369" watchObservedRunningTime="2026-02-17 14:27:29.071535812 +0000 UTC m=+1275.414464081" Feb 17 14:27:32 crc kubenswrapper[4836]: I0217 14:27:32.079628 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8g7x" event={"ID":"df3a6cf1-bca0-45b2-9f7c-6d483452d49d","Type":"ContainerStarted","Data":"2953db160f228060c084b5fd479ec149c2b0acd6cacae4957fb68229d08ae1b9"} Feb 17 14:27:32 crc kubenswrapper[4836]: I0217 14:27:32.092357 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"22a3a4e55efe4e3d464889001a9fe901ffb4ea46c14cdd7e85c9d2c2e6e3edfd"} Feb 17 14:27:32 crc kubenswrapper[4836]: I0217 14:27:32.092438 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"5e6c4cd2951fb0b422fb40c5f945bb3f2e7b7e9db69228f8713f1bc45540baa5"} Feb 17 14:27:32 crc kubenswrapper[4836]: I0217 14:27:32.092459 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"4cb65cd5b122c3dcb9c9f25106640404b4eabc5c79e62548ea4c28fea1377b9a"} Feb 17 14:27:32 crc kubenswrapper[4836]: I0217 14:27:32.106143 4836 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-z8g7x" podStartSLOduration=2.930026148 podStartE2EDuration="41.106107481s" podCreationTimestamp="2026-02-17 14:26:51 +0000 UTC" firstStartedPulling="2026-02-17 14:26:52.865521887 +0000 UTC m=+1239.208450156" lastFinishedPulling="2026-02-17 14:27:31.04160322 +0000 UTC m=+1277.384531489" observedRunningTime="2026-02-17 14:27:32.100490569 +0000 UTC m=+1278.443418858" watchObservedRunningTime="2026-02-17 14:27:32.106107481 +0000 UTC m=+1278.449035760" Feb 17 14:27:33 crc kubenswrapper[4836]: I0217 14:27:33.110042 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"09f154f4fd28f63dbec7ac1035524a84210785f03e39a3ac0cfc39a54b0f40e4"} Feb 17 14:27:34 crc kubenswrapper[4836]: I0217 14:27:34.228709 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"1d67e755bebbdcbc6c4235f60792c46f335a26119c8d75fdd344cb8cbda7ab2e"} Feb 17 14:27:34 crc kubenswrapper[4836]: I0217 14:27:34.229926 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"59ef4eb444efd2a3547796e2e5dbbda7d2ba921912ba9e5ccad7ea3bd4ca8b8c"} Feb 17 14:27:34 crc kubenswrapper[4836]: I0217 14:27:34.231780 4836 generic.go:334] "Generic (PLEG): container finished" podID="6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" containerID="515b55d1439f54ad3649999fcf112b0e86238d037ec2170a1978295a22c02429" exitCode=0 Feb 17 14:27:34 crc kubenswrapper[4836]: I0217 14:27:34.231828 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q25rr" 
event={"ID":"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f","Type":"ContainerDied","Data":"515b55d1439f54ad3649999fcf112b0e86238d037ec2170a1978295a22c02429"} Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.255871 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"12fae51aea87a0815d125bc2d63bb27751d34a74693a02e2653210bd2a718db7"} Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.256467 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"025091d0945dd7416700daf100e14ba799e1c45450a4fd76dca0971c8617473d"} Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.256500 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"ef3b36336f0744c33a0143bf2d440f444db1cbbcfe714850f5511d030edb053f"} Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.746746 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.863989 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-config-data\") pod \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.864150 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-combined-ca-bundle\") pod \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.864773 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njnhf\" (UniqueName: \"kubernetes.io/projected/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-kube-api-access-njnhf\") pod \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\" (UID: \"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f\") " Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.874683 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-kube-api-access-njnhf" (OuterVolumeSpecName: "kube-api-access-njnhf") pod "6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" (UID: "6a1d4ef8-03d9-42d8-ae0b-9410767ed25f"). InnerVolumeSpecName "kube-api-access-njnhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.895674 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" (UID: "6a1d4ef8-03d9-42d8-ae0b-9410767ed25f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.930939 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-config-data" (OuterVolumeSpecName: "config-data") pod "6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" (UID: "6a1d4ef8-03d9-42d8-ae0b-9410767ed25f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.967866 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.968391 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njnhf\" (UniqueName: \"kubernetes.io/projected/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-kube-api-access-njnhf\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:35 crc kubenswrapper[4836]: I0217 14:27:35.968407 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.271139 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-q25rr" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.271135 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q25rr" event={"ID":"6a1d4ef8-03d9-42d8-ae0b-9410767ed25f","Type":"ContainerDied","Data":"d0ca56828b8b775526488a0852f84f24a4af21e12c68a30c67a55c19e97b65de"} Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.271342 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ca56828b8b775526488a0852f84f24a4af21e12c68a30c67a55c19e97b65de" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.298441 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"400f3b9611221e6f2eab0fbb1342619856c817ae813d92448fcfbd94d6c95a02"} Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.298517 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e482046c-502a-4f41-b013-7b3ef1c71ee1","Type":"ContainerStarted","Data":"f662b0f988bf0c51a41f39bffce76367bf85b3cac50090e090da9029607a1a75"} Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.411126 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=48.463657773 podStartE2EDuration="1m2.411098228s" podCreationTimestamp="2026-02-17 14:26:34 +0000 UTC" firstStartedPulling="2026-02-17 14:27:19.326606863 +0000 UTC m=+1265.669535132" lastFinishedPulling="2026-02-17 14:27:33.274047318 +0000 UTC m=+1279.616975587" observedRunningTime="2026-02-17 14:27:36.353899012 +0000 UTC m=+1282.696827311" watchObservedRunningTime="2026-02-17 14:27:36.411098228 +0000 UTC m=+1282.754026487" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.613946 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r25dh"] Feb 17 14:27:36 crc 
kubenswrapper[4836]: E0217 14:27:36.614852 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623225aa-2492-494e-be5b-92acef6f23cf" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.614881 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="623225aa-2492-494e-be5b-92acef6f23cf" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.614918 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" containerName="keystone-db-sync" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.614929 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" containerName="keystone-db-sync" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.614943 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edeb89f-0bd9-466e-a9f9-2d45575d2c72" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.614951 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edeb89f-0bd9-466e-a9f9-2d45575d2c72" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.614983 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb354e85-311d-40bb-ae4a-5c535d4d89b9" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.614990 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb354e85-311d-40bb-ae4a-5c535d4d89b9" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.615011 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fe36f3-d6b6-44e0-b85b-6def754fd08e" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615017 4836 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a1fe36f3-d6b6-44e0-b85b-6def754fd08e" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.615033 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ce1c7a-57e8-491e-84ab-8aed8baea37b" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615039 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ce1c7a-57e8-491e-84ab-8aed8baea37b" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.615048 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee1a0f2-86df-4f97-957a-22bbd7da4505" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615054 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee1a0f2-86df-4f97-957a-22bbd7da4505" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.615063 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767841a7-db94-430a-b408-10e5bd0350e5" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615070 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="767841a7-db94-430a-b408-10e5bd0350e5" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: E0217 14:27:36.615083 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ee15e8-6695-454f-83ad-d54176458497" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615091 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ee15e8-6695-454f-83ad-d54176458497" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615336 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="623225aa-2492-494e-be5b-92acef6f23cf" containerName="mariadb-account-create-update" Feb 17 
14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615364 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fe36f3-d6b6-44e0-b85b-6def754fd08e" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615375 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" containerName="keystone-db-sync" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615388 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ce1c7a-57e8-491e-84ab-8aed8baea37b" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615400 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edeb89f-0bd9-466e-a9f9-2d45575d2c72" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615408 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee1a0f2-86df-4f97-957a-22bbd7da4505" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615418 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb354e85-311d-40bb-ae4a-5c535d4d89b9" containerName="mariadb-database-create" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615424 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ee15e8-6695-454f-83ad-d54176458497" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.615435 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="767841a7-db94-430a-b408-10e5bd0350e5" containerName="mariadb-account-create-update" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.617187 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.636968 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.637146 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.637846 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.638222 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.643876 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-w4x5z"] Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.648682 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.676786 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r25dh"] Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.679487 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s87v5" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.688891 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-w4x5z"] Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757058 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-scripts\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757127 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-fernet-keys\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757207 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbt2q\" (UniqueName: \"kubernetes.io/projected/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-kube-api-access-lbt2q\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757268 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-credential-keys\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757353 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7bv\" (UniqueName: \"kubernetes.io/projected/d985347f-7490-475c-a126-182ed65224d4-kube-api-access-fn7bv\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757448 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-config\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 
14:27:36.757500 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757540 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-combined-ca-bundle\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757601 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757636 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-dns-svc\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.757689 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-config-data\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 
14:27:36.864136 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-scripts\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864204 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-fernet-keys\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864310 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbt2q\" (UniqueName: \"kubernetes.io/projected/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-kube-api-access-lbt2q\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864402 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-credential-keys\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864466 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7bv\" (UniqueName: \"kubernetes.io/projected/d985347f-7490-475c-a126-182ed65224d4-kube-api-access-fn7bv\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864580 4836 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-config\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864625 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864672 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-combined-ca-bundle\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864736 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864770 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-dns-svc\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.864818 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-config-data\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.867501 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-config\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.868433 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.868853 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.869060 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-dns-svc\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.910601 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-combined-ca-bundle\") pod \"keystone-bootstrap-r25dh\" (UID: 
\"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.913264 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-scripts\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.922228 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-fernet-keys\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.924559 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-config-data\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.934155 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbt2q\" (UniqueName: \"kubernetes.io/projected/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-kube-api-access-lbt2q\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.934728 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-credential-keys\") pod \"keystone-bootstrap-r25dh\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.962361 4836 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qqwhc"] Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.964199 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.982094 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.982453 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.982611 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cg95t" Feb 17 14:27:36 crc kubenswrapper[4836]: I0217 14:27:36.990458 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7bv\" (UniqueName: \"kubernetes.io/projected/d985347f-7490-475c-a126-182ed65224d4-kube-api-access-fn7bv\") pod \"dnsmasq-dns-f877ddd87-w4x5z\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.027181 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.028194 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.046067 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qqwhc"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.105230 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-scripts\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.105313 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftmbq\" (UniqueName: \"kubernetes.io/projected/8185c649-f1ad-4230-830d-07d002e5b358-kube-api-access-ftmbq\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.117414 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-combined-ca-bundle\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.117776 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-db-sync-config-data\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.117922 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/8185c649-f1ad-4230-830d-07d002e5b358-etc-machine-id\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.118078 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-config-data\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.129973 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-pvljf"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.132031 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.159156 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-l28cf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.160157 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.160472 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.160612 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.196581 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-pvljf"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.227783 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-scripts\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.227888 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-scripts\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.227971 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftmbq\" (UniqueName: \"kubernetes.io/projected/8185c649-f1ad-4230-830d-07d002e5b358-kube-api-access-ftmbq\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.228028 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-combined-ca-bundle\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.228134 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-certs\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.228219 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-db-sync-config-data\") pod \"cinder-db-sync-qqwhc\" (UID: 
\"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.228846 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8185c649-f1ad-4230-830d-07d002e5b358-etc-machine-id\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.229163 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-config-data\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.229226 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-combined-ca-bundle\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.229425 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-config-data\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.229602 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfrn2\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-kube-api-access-hfrn2\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " 
pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.229907 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8185c649-f1ad-4230-830d-07d002e5b358-etc-machine-id\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.530490 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-combined-ca-bundle\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.551632 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-db-sync-config-data\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.555123 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-certs\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.555355 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-combined-ca-bundle\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.555433 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-config-data\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.555511 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfrn2\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-kube-api-access-hfrn2\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.555598 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-scripts\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.561137 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-scripts\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.569105 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-config-data\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.580195 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-combined-ca-bundle\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.593488 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-certs\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.594131 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-scripts\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.596618 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftmbq\" (UniqueName: \"kubernetes.io/projected/8185c649-f1ad-4230-830d-07d002e5b358-kube-api-access-ftmbq\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.598668 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g9l4s"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.612944 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g9l4s" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.619916 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fkh7w" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.634539 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-config-data\") pod \"cinder-db-sync-qqwhc\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.661353 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.721287 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfrn2\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-kube-api-access-hfrn2\") pod \"cloudkitty-db-sync-pvljf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.742274 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g9l4s"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.757912 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.769032 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqtgf\" (UniqueName: \"kubernetes.io/projected/18361bc2-5db1-4611-be18-38593e0b5d5d-kube-api-access-sqtgf\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.769141 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-db-sync-config-data\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.769170 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-combined-ca-bundle\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.776679 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sb6h7"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.778379 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sb6h7" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.786442 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.786796 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.787140 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qfhnd" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.796641 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-w4x5z"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.827983 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sb6h7"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.855848 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.861208 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-pdhxs"] Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.863810 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pdhxs" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.874857 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.876089 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7l8w5" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.876284 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.877496 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqtgf\" (UniqueName: \"kubernetes.io/projected/18361bc2-5db1-4611-be18-38593e0b5d5d-kube-api-access-sqtgf\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.877553 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb7hf\" (UniqueName: \"kubernetes.io/projected/81ddbaec-f370-44a3-802b-26980ea65d2f-kube-api-access-rb7hf\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.877633 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-combined-ca-bundle\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7" Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.877727 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-db-sync-config-data\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.877747 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-combined-ca-bundle\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.878056 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-config\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.887900 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-db-sync-config-data\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.888166 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-combined-ca-bundle\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.902104 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9qw4t"]
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.911773 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.915699 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqtgf\" (UniqueName: \"kubernetes.io/projected/18361bc2-5db1-4611-be18-38593e0b5d5d-kube-api-access-sqtgf\") pod \"barbican-db-sync-g9l4s\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") " pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.918034 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.940854 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pdhxs"]
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.989490 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9qw4t"]
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.995771 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-config\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.995873 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k78mf\" (UniqueName: \"kubernetes.io/projected/1fe4b42c-afbf-41e1-8035-5fffb156eadc-kube-api-access-k78mf\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.996019 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-config\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.996283 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-scripts\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.996799 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb7hf\" (UniqueName: \"kubernetes.io/projected/81ddbaec-f370-44a3-802b-26980ea65d2f-kube-api-access-rb7hf\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.996959 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-config-data\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.997083 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.997175 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fe4b42c-afbf-41e1-8035-5fffb156eadc-logs\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.997271 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-combined-ca-bundle\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.997667 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-combined-ca-bundle\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.998838 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.999230 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.999466 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwksm\" (UniqueName: \"kubernetes.io/projected/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-kube-api-access-rwksm\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:37 crc kubenswrapper[4836]: I0217 14:27:37.999612 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-svc\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.001937 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-config\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.011753 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-combined-ca-bundle\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.019128 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.035257 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb7hf\" (UniqueName: \"kubernetes.io/projected/81ddbaec-f370-44a3-802b-26980ea65d2f-kube-api-access-rb7hf\") pod \"neutron-db-sync-sb6h7\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.055110 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.060204 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.060569 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.091570 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.108123 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sb6h7"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.113993 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-run-httpd\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127397 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-config-data\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127541 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127621 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fe4b42c-afbf-41e1-8035-5fffb156eadc-logs\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127674 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-config-data\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127733 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127780 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-combined-ca-bundle\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127849 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127929 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127960 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwksm\" (UniqueName: \"kubernetes.io/projected/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-kube-api-access-rwksm\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.127990 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.128025 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-svc\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.128189 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-log-httpd\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.128313 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-config\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.128375 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k78mf\" (UniqueName: \"kubernetes.io/projected/1fe4b42c-afbf-41e1-8035-5fffb156eadc-kube-api-access-k78mf\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.128447 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpknx\" (UniqueName: \"kubernetes.io/projected/2a1d16f5-4710-43b4-805e-315ed73bb24e-kube-api-access-dpknx\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.130033 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.130342 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.146202 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-config-data\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.173279 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fe4b42c-afbf-41e1-8035-5fffb156eadc-logs\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.174143 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.184272 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-svc\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.184361 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.185469 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-config\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.185553 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-scripts\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.185732 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-scripts\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.193567 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwksm\" (UniqueName: \"kubernetes.io/projected/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-kube-api-access-rwksm\") pod \"dnsmasq-dns-5959f8865f-9qw4t\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.204728 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9qw4t"]
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.206335 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9qw4t"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.207453 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-scripts\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.210214 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-combined-ca-bundle\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.215594 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k78mf\" (UniqueName: \"kubernetes.io/projected/1fe4b42c-afbf-41e1-8035-5fffb156eadc-kube-api-access-k78mf\") pod \"placement-db-sync-pdhxs\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") " pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.285321 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"]
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.288601 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.291213 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.291345 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-log-httpd\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.291560 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpknx\" (UniqueName: \"kubernetes.io/projected/2a1d16f5-4710-43b4-805e-315ed73bb24e-kube-api-access-dpknx\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.291685 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-scripts\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.291847 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-run-httpd\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.291915 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-config-data\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.291932 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.294007 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-log-httpd\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.294999 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-run-httpd\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.298030 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.301116 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-scripts\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.311490 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.323279 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-config-data\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.333979 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpknx\" (UniqueName: \"kubernetes.io/projected/2a1d16f5-4710-43b4-805e-315ed73bb24e-kube-api-access-dpknx\") pod \"ceilometer-0\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.343872 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"]
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.411841 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.412023 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.412112 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.412453 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.414555 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78t2\" (UniqueName: \"kubernetes.io/projected/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-kube-api-access-n78t2\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.414767 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-config\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.444942 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.508380 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.658888 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-config\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.659161 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.659455 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.659713 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.659880 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.659907 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78t2\" (UniqueName: \"kubernetes.io/projected/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-kube-api-access-n78t2\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.662487 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.662560 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.663049 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-config\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.663975 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.664277 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.721199 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78t2\" (UniqueName: \"kubernetes.io/projected/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-kube-api-access-n78t2\") pod \"dnsmasq-dns-58dd9ff6bc-9hcq9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.804601 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r25dh" event={"ID":"9b18f8ba-fa1b-4a70-8774-0df51c645ed9","Type":"ContainerStarted","Data":"57caf7dfbbf9619fcde234bc6e52e4ee9643128225ce4df5e2ebf099d43860d3"}
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.804698 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r25dh"]
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.804717 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-w4x5z"]
Feb 17 14:27:38 crc kubenswrapper[4836]: I0217 14:27:38.941284 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"
Feb 17 14:27:39 crc kubenswrapper[4836]: I0217 14:27:39.956494 4836 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.009196 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" event={"ID":"d985347f-7490-475c-a126-182ed65224d4","Type":"ContainerStarted","Data":"9b142894b75620c580a00cf3c274a19998723fde1cfc4c18c89919815fac6fa8"}
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.027049 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qqwhc"]
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.046958 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-pvljf"]
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.048740 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqwhc" event={"ID":"8185c649-f1ad-4230-830d-07d002e5b358","Type":"ContainerStarted","Data":"b3482ed7c18ae58a71068d39ec0f731b2f5c23d1bee2fd95e9d280383de59ee3"}
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.074391 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r25dh" podStartSLOduration=4.074350939 podStartE2EDuration="4.074350939s" podCreationTimestamp="2026-02-17 14:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:27:40.067105744 +0000 UTC m=+1286.410034023" watchObservedRunningTime="2026-02-17 14:27:40.074350939 +0000 UTC m=+1286.417279208"
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.428946 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sb6h7"]
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.449752 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9qw4t"]
Feb 17 14:27:40 crc kubenswrapper[4836]: W0217 14:27:40.463117 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9a920b_04d0_41e4_8a9e_3b53f5ab7705.slice/crio-c42b88cfee7c21f45ce13367daa0b57526553b3dda8bb68d81609f13bacecaf3 WatchSource:0}: Error finding container c42b88cfee7c21f45ce13367daa0b57526553b3dda8bb68d81609f13bacecaf3: Status 404 returned error can't find the container with id c42b88cfee7c21f45ce13367daa0b57526553b3dda8bb68d81609f13bacecaf3
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.493896 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g9l4s"]
Feb 17 14:27:40 crc kubenswrapper[4836]: W0217 14:27:40.583995 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18361bc2_5db1_4611_be18_38593e0b5d5d.slice/crio-b92684999081dd7ebe3c5f048ea5d9a568a0e24a28001ce5ab97b2282351bbcb WatchSource:0}: Error finding container b92684999081dd7ebe3c5f048ea5d9a568a0e24a28001ce5ab97b2282351bbcb: Status 404 returned error can't find the container with id b92684999081dd7ebe3c5f048ea5d9a568a0e24a28001ce5ab97b2282351bbcb
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.911394 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"]
Feb 17 14:27:40 crc kubenswrapper[4836]: I0217 14:27:40.960269 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.004588 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pdhxs"]
Feb 17 14:27:41 crc kubenswrapper[4836]: W0217 14:27:41.033799 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccce7d80_ec87_4fb2_a75f_1b5ddc2f4be9.slice/crio-e9f16c54dee6fca57cba69a1f24712669edc03c2f1b74e5ff682993352dbd1af WatchSource:0}: Error finding container e9f16c54dee6fca57cba69a1f24712669edc03c2f1b74e5ff682993352dbd1af: Status 404 returned error can't find the container with id e9f16c54dee6fca57cba69a1f24712669edc03c2f1b74e5ff682993352dbd1af
Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.132809 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sb6h7" event={"ID":"81ddbaec-f370-44a3-802b-26980ea65d2f","Type":"ContainerStarted","Data":"a94f2fee60c2cb9701b67002fd76857eaeaf8cc9cdf14886139c9af2827d62a6"}
Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.153794 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.165554 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerStarted","Data":"725a655ac601adcaa8185b937f6643704390b16c79c731f2de3ba649c346ef2b"}
Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.192714 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-pvljf" event={"ID":"4e016162-2025-44ad-989d-ce71d9f8f9bf","Type":"ContainerStarted","Data":"5256492605b5f72154c618f9880c205b521d09a7d2c8e835b6a6c8642893045e"}
Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.206315 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-9qw4t" event={"ID":"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705","Type":"ContainerStarted","Data":"c42b88cfee7c21f45ce13367daa0b57526553b3dda8bb68d81609f13bacecaf3"}
Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.243624 4836 generic.go:334] "Generic (PLEG): container finished" podID="d985347f-7490-475c-a126-182ed65224d4"
containerID="14423eb209623d815ed52e92ff6318e5e659fcf35e927a649dbd595f58224937" exitCode=0 Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.243804 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" event={"ID":"d985347f-7490-475c-a126-182ed65224d4","Type":"ContainerDied","Data":"14423eb209623d815ed52e92ff6318e5e659fcf35e927a649dbd595f58224937"} Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.268806 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g9l4s" event={"ID":"18361bc2-5db1-4611-be18-38593e0b5d5d","Type":"ContainerStarted","Data":"b92684999081dd7ebe3c5f048ea5d9a568a0e24a28001ce5ab97b2282351bbcb"} Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.271829 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r25dh" event={"ID":"9b18f8ba-fa1b-4a70-8774-0df51c645ed9","Type":"ContainerStarted","Data":"85bf6d2c05b11776e36fd7dffb8368edf8f8e5b125a942780ac6175dd831a159"} Feb 17 14:27:41 crc kubenswrapper[4836]: I0217 14:27:41.298191 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" event={"ID":"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9","Type":"ContainerStarted","Data":"e9f16c54dee6fca57cba69a1f24712669edc03c2f1b74e5ff682993352dbd1af"} Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.369761 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sb6h7" event={"ID":"81ddbaec-f370-44a3-802b-26980ea65d2f","Type":"ContainerStarted","Data":"35ecf820b0414db1c94b077c083568db5d4a957bb9d735db9d4e378b6ebbc861"} Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.387716 4836 generic.go:334] "Generic (PLEG): container finished" podID="6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" containerID="4dc5211e3fe44dc01a9738a9c6be073fa788d5b4e5643aa53e9529d0d5d0943a" exitCode=0 Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.387876 4836 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-9qw4t" event={"ID":"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705","Type":"ContainerDied","Data":"4dc5211e3fe44dc01a9738a9c6be073fa788d5b4e5643aa53e9529d0d5d0943a"} Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.409682 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sb6h7" podStartSLOduration=5.409519045 podStartE2EDuration="5.409519045s" podCreationTimestamp="2026-02-17 14:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:27:42.408467417 +0000 UTC m=+1288.751395696" watchObservedRunningTime="2026-02-17 14:27:42.409519045 +0000 UTC m=+1288.752447314" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.450445 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" event={"ID":"d985347f-7490-475c-a126-182ed65224d4","Type":"ContainerDied","Data":"9b142894b75620c580a00cf3c274a19998723fde1cfc4c18c89919815fac6fa8"} Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.450496 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b142894b75620c580a00cf3c274a19998723fde1cfc4c18c89919815fac6fa8" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.456535 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.466639 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pdhxs" event={"ID":"1fe4b42c-afbf-41e1-8035-5fffb156eadc","Type":"ContainerStarted","Data":"63af66d6a1d8223670f744aeb2ae7fb99f6f7344d8ab31ed483859da53a657a7"} Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.501197 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-config\") pod \"d985347f-7490-475c-a126-182ed65224d4\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.501428 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-nb\") pod \"d985347f-7490-475c-a126-182ed65224d4\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.501468 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-dns-svc\") pod \"d985347f-7490-475c-a126-182ed65224d4\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.501503 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-sb\") pod \"d985347f-7490-475c-a126-182ed65224d4\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.501682 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn7bv\" (UniqueName: 
\"kubernetes.io/projected/d985347f-7490-475c-a126-182ed65224d4-kube-api-access-fn7bv\") pod \"d985347f-7490-475c-a126-182ed65224d4\" (UID: \"d985347f-7490-475c-a126-182ed65224d4\") " Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.515177 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d985347f-7490-475c-a126-182ed65224d4-kube-api-access-fn7bv" (OuterVolumeSpecName: "kube-api-access-fn7bv") pod "d985347f-7490-475c-a126-182ed65224d4" (UID: "d985347f-7490-475c-a126-182ed65224d4"). InnerVolumeSpecName "kube-api-access-fn7bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.556177 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d985347f-7490-475c-a126-182ed65224d4" (UID: "d985347f-7490-475c-a126-182ed65224d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.556564 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-config" (OuterVolumeSpecName: "config") pod "d985347f-7490-475c-a126-182ed65224d4" (UID: "d985347f-7490-475c-a126-182ed65224d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.600050 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d985347f-7490-475c-a126-182ed65224d4" (UID: "d985347f-7490-475c-a126-182ed65224d4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.605952 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn7bv\" (UniqueName: \"kubernetes.io/projected/d985347f-7490-475c-a126-182ed65224d4-kube-api-access-fn7bv\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.605991 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.606020 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.606030 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.626752 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d985347f-7490-475c-a126-182ed65224d4" (UID: "d985347f-7490-475c-a126-182ed65224d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:42 crc kubenswrapper[4836]: I0217 14:27:42.733629 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d985347f-7490-475c-a126-182ed65224d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.357809 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9qw4t" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.476325 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-svc\") pod \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.476478 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-nb\") pod \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.476729 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-config\") pod \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.476956 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwksm\" (UniqueName: \"kubernetes.io/projected/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-kube-api-access-rwksm\") pod \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.477053 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-sb\") pod \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.477244 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-swift-storage-0\") pod \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\" (UID: \"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705\") " Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.489272 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-kube-api-access-rwksm" (OuterVolumeSpecName: "kube-api-access-rwksm") pod "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" (UID: "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705"). InnerVolumeSpecName "kube-api-access-rwksm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.515023 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-9qw4t" event={"ID":"6e9a920b-04d0-41e4-8a9e-3b53f5ab7705","Type":"ContainerDied","Data":"c42b88cfee7c21f45ce13367daa0b57526553b3dda8bb68d81609f13bacecaf3"} Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.515114 4836 scope.go:117] "RemoveContainer" containerID="4dc5211e3fe44dc01a9738a9c6be073fa788d5b4e5643aa53e9529d0d5d0943a" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.515525 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9qw4t" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.525367 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" (UID: "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.535986 4836 generic.go:334] "Generic (PLEG): container finished" podID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerID="99b7d1e9f2cb717570cc4209028495f2ccc23c4beb025f8110935cc03d58feb9" exitCode=0 Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.536126 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-w4x5z" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.536992 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" (UID: "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.537939 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" event={"ID":"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9","Type":"ContainerDied","Data":"99b7d1e9f2cb717570cc4209028495f2ccc23c4beb025f8110935cc03d58feb9"} Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.541385 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" (UID: "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.557782 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-config" (OuterVolumeSpecName: "config") pod "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" (UID: "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.568978 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" (UID: "6e9a920b-04d0-41e4-8a9e-3b53f5ab7705"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.588770 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.589267 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.589313 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.589334 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwksm\" (UniqueName: \"kubernetes.io/projected/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-kube-api-access-rwksm\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:43 crc 
kubenswrapper[4836]: I0217 14:27:43.589350 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:43 crc kubenswrapper[4836]: I0217 14:27:43.589377 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:44 crc kubenswrapper[4836]: I0217 14:27:44.005795 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-w4x5z"] Feb 17 14:27:44 crc kubenswrapper[4836]: I0217 14:27:44.020288 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-w4x5z"] Feb 17 14:27:44 crc kubenswrapper[4836]: I0217 14:27:44.050229 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9qw4t"] Feb 17 14:27:44 crc kubenswrapper[4836]: I0217 14:27:44.068262 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9qw4t"] Feb 17 14:27:44 crc kubenswrapper[4836]: I0217 14:27:44.721873 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" path="/var/lib/kubelet/pods/6e9a920b-04d0-41e4-8a9e-3b53f5ab7705/volumes" Feb 17 14:27:44 crc kubenswrapper[4836]: I0217 14:27:44.722765 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d985347f-7490-475c-a126-182ed65224d4" path="/var/lib/kubelet/pods/d985347f-7490-475c-a126-182ed65224d4/volumes" Feb 17 14:27:45 crc kubenswrapper[4836]: I0217 14:27:45.715167 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" event={"ID":"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9","Type":"ContainerStarted","Data":"2fca778edd45bdfb866af7aaa0fc6f307d910a96cf1cd5ecfab2d14db35f72e8"} Feb 
17 14:27:45 crc kubenswrapper[4836]: I0217 14:27:45.715808 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" Feb 17 14:27:45 crc kubenswrapper[4836]: I0217 14:27:45.793414 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" podStartSLOduration=8.793384334 podStartE2EDuration="8.793384334s" podCreationTimestamp="2026-02-17 14:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:27:45.751373578 +0000 UTC m=+1292.094301877" watchObservedRunningTime="2026-02-17 14:27:45.793384334 +0000 UTC m=+1292.136312613" Feb 17 14:27:46 crc kubenswrapper[4836]: I0217 14:27:46.738253 4836 generic.go:334] "Generic (PLEG): container finished" podID="9b18f8ba-fa1b-4a70-8774-0df51c645ed9" containerID="85bf6d2c05b11776e36fd7dffb8368edf8f8e5b125a942780ac6175dd831a159" exitCode=0 Feb 17 14:27:46 crc kubenswrapper[4836]: I0217 14:27:46.739608 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r25dh" event={"ID":"9b18f8ba-fa1b-4a70-8774-0df51c645ed9","Type":"ContainerDied","Data":"85bf6d2c05b11776e36fd7dffb8368edf8f8e5b125a942780ac6175dd831a159"} Feb 17 14:27:48 crc kubenswrapper[4836]: I0217 14:27:48.776107 4836 generic.go:334] "Generic (PLEG): container finished" podID="df3a6cf1-bca0-45b2-9f7c-6d483452d49d" containerID="2953db160f228060c084b5fd479ec149c2b0acd6cacae4957fb68229d08ae1b9" exitCode=0 Feb 17 14:27:48 crc kubenswrapper[4836]: I0217 14:27:48.776274 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8g7x" event={"ID":"df3a6cf1-bca0-45b2-9f7c-6d483452d49d","Type":"ContainerDied","Data":"2953db160f228060c084b5fd479ec149c2b0acd6cacae4957fb68229d08ae1b9"} Feb 17 14:27:53 crc kubenswrapper[4836]: I0217 14:27:53.944613 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.025995 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wbh2w"] Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.026818 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-wbh2w" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" containerID="cri-o://de75bc86bd0570fcef07a3f3195cfec352721b59eef66e22b061ebca87ca6456" gracePeriod=10 Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.698491 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.715833 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z8g7x" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.751256 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wbh2w" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.815862 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-scripts\") pod \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.817163 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-credential-keys\") pod \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " Feb 17 14:27:54 crc 
kubenswrapper[4836]: I0217 14:27:54.817222 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-fernet-keys\") pod \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.817265 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grffb\" (UniqueName: \"kubernetes.io/projected/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-kube-api-access-grffb\") pod \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.817318 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbt2q\" (UniqueName: \"kubernetes.io/projected/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-kube-api-access-lbt2q\") pod \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.817364 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-combined-ca-bundle\") pod \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.817476 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-config-data\") pod \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\" (UID: \"9b18f8ba-fa1b-4a70-8774-0df51c645ed9\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.817499 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-db-sync-config-data\") pod \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.818540 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-config-data\") pod \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.825290 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9b18f8ba-fa1b-4a70-8774-0df51c645ed9" (UID: "9b18f8ba-fa1b-4a70-8774-0df51c645ed9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.826621 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9b18f8ba-fa1b-4a70-8774-0df51c645ed9" (UID: "9b18f8ba-fa1b-4a70-8774-0df51c645ed9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.826828 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-kube-api-access-lbt2q" (OuterVolumeSpecName: "kube-api-access-lbt2q") pod "9b18f8ba-fa1b-4a70-8774-0df51c645ed9" (UID: "9b18f8ba-fa1b-4a70-8774-0df51c645ed9"). InnerVolumeSpecName "kube-api-access-lbt2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.826959 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-kube-api-access-grffb" (OuterVolumeSpecName: "kube-api-access-grffb") pod "df3a6cf1-bca0-45b2-9f7c-6d483452d49d" (UID: "df3a6cf1-bca0-45b2-9f7c-6d483452d49d"). InnerVolumeSpecName "kube-api-access-grffb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.827178 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-scripts" (OuterVolumeSpecName: "scripts") pod "9b18f8ba-fa1b-4a70-8774-0df51c645ed9" (UID: "9b18f8ba-fa1b-4a70-8774-0df51c645ed9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.831096 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df3a6cf1-bca0-45b2-9f7c-6d483452d49d" (UID: "df3a6cf1-bca0-45b2-9f7c-6d483452d49d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.859220 4836 generic.go:334] "Generic (PLEG): container finished" podID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerID="de75bc86bd0570fcef07a3f3195cfec352721b59eef66e22b061ebca87ca6456" exitCode=0 Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.859343 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wbh2w" event={"ID":"312259c2-4f8f-401d-a19e-64d0bc7dd35f","Type":"ContainerDied","Data":"de75bc86bd0570fcef07a3f3195cfec352721b59eef66e22b061ebca87ca6456"} Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.862023 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r25dh" event={"ID":"9b18f8ba-fa1b-4a70-8774-0df51c645ed9","Type":"ContainerDied","Data":"57caf7dfbbf9619fcde234bc6e52e4ee9643128225ce4df5e2ebf099d43860d3"} Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.862093 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57caf7dfbbf9619fcde234bc6e52e4ee9643128225ce4df5e2ebf099d43860d3" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.862673 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r25dh" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.864106 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-config-data" (OuterVolumeSpecName: "config-data") pod "9b18f8ba-fa1b-4a70-8774-0df51c645ed9" (UID: "9b18f8ba-fa1b-4a70-8774-0df51c645ed9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.866225 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8g7x" event={"ID":"df3a6cf1-bca0-45b2-9f7c-6d483452d49d","Type":"ContainerDied","Data":"a970e805deb8fc7e4ea80574fe0f4020e8d303f5c75ae4049947b41814dd24fc"} Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.866254 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a970e805deb8fc7e4ea80574fe0f4020e8d303f5c75ae4049947b41814dd24fc" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.866373 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z8g7x" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.866586 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b18f8ba-fa1b-4a70-8774-0df51c645ed9" (UID: "9b18f8ba-fa1b-4a70-8774-0df51c645ed9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.884793 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-config-data" (OuterVolumeSpecName: "config-data") pod "df3a6cf1-bca0-45b2-9f7c-6d483452d49d" (UID: "df3a6cf1-bca0-45b2-9f7c-6d483452d49d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.920248 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-combined-ca-bundle\") pod \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\" (UID: \"df3a6cf1-bca0-45b2-9f7c-6d483452d49d\") " Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921328 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921356 4836 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921367 4836 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921385 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grffb\" (UniqueName: \"kubernetes.io/projected/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-kube-api-access-grffb\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921396 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbt2q\" (UniqueName: \"kubernetes.io/projected/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-kube-api-access-lbt2q\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921406 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921415 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b18f8ba-fa1b-4a70-8774-0df51c645ed9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921425 4836 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.921434 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:54 crc kubenswrapper[4836]: I0217 14:27:54.952207 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df3a6cf1-bca0-45b2-9f7c-6d483452d49d" (UID: "df3a6cf1-bca0-45b2-9f7c-6d483452d49d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.023457 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3a6cf1-bca0-45b2-9f7c-6d483452d49d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.832446 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r25dh"] Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.855494 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r25dh"] Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.920707 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vmgps"] Feb 17 14:27:55 crc kubenswrapper[4836]: E0217 14:27:55.921279 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3a6cf1-bca0-45b2-9f7c-6d483452d49d" containerName="glance-db-sync" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921385 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3a6cf1-bca0-45b2-9f7c-6d483452d49d" containerName="glance-db-sync" Feb 17 14:27:55 crc kubenswrapper[4836]: E0217 14:27:55.921411 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985347f-7490-475c-a126-182ed65224d4" containerName="init" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921417 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985347f-7490-475c-a126-182ed65224d4" containerName="init" Feb 17 14:27:55 crc kubenswrapper[4836]: E0217 14:27:55.921427 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b18f8ba-fa1b-4a70-8774-0df51c645ed9" containerName="keystone-bootstrap" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921436 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b18f8ba-fa1b-4a70-8774-0df51c645ed9" containerName="keystone-bootstrap" Feb 17 14:27:55 crc 
kubenswrapper[4836]: E0217 14:27:55.921457 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" containerName="init" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921464 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" containerName="init" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921706 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3a6cf1-bca0-45b2-9f7c-6d483452d49d" containerName="glance-db-sync" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921727 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985347f-7490-475c-a126-182ed65224d4" containerName="init" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921746 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b18f8ba-fa1b-4a70-8774-0df51c645ed9" containerName="keystone-bootstrap" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.921758 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9a920b-04d0-41e4-8a9e-3b53f5ab7705" containerName="init" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.923363 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.928515 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.928827 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.929153 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.933019 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s87v5" Feb 17 14:27:55 crc kubenswrapper[4836]: I0217 14:27:55.953551 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vmgps"] Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.044923 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-config-data\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.045013 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-combined-ca-bundle\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.045062 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-credential-keys\") pod \"keystone-bootstrap-vmgps\" (UID: 
\"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.045102 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-scripts\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.045156 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-fernet-keys\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.045255 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflgh\" (UniqueName: \"kubernetes.io/projected/10331926-261d-4e44-a8c2-89846903ca12-kube-api-access-fflgh\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.150332 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fflgh\" (UniqueName: \"kubernetes.io/projected/10331926-261d-4e44-a8c2-89846903ca12-kube-api-access-fflgh\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.150462 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-config-data\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " 
pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.150500 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-combined-ca-bundle\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.150533 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-credential-keys\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.150567 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-scripts\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.150616 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-fernet-keys\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.166791 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-credential-keys\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.167550 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-combined-ca-bundle\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.173360 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-scripts\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.173924 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-config-data\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.174396 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-fernet-keys\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.185832 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fflgh\" (UniqueName: \"kubernetes.io/projected/10331926-261d-4e44-a8c2-89846903ca12-kube-api-access-fflgh\") pod \"keystone-bootstrap-vmgps\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") " pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.251377 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mpdz8"] Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 
14:27:56.253332 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.261762 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vmgps" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.293160 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mpdz8"] Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.356863 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-config\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.356938 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.356977 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfb6n\" (UniqueName: \"kubernetes.io/projected/522206b4-5f50-46e4-a363-24021bd65471-kube-api-access-hfb6n\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.357084 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-svc\") pod 
\"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.357101 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.357125 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.460852 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.460911 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.460954 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-sb\") pod 
\"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.461034 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-config\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.461086 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.461135 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfb6n\" (UniqueName: \"kubernetes.io/projected/522206b4-5f50-46e4-a363-24021bd65471-kube-api-access-hfb6n\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.462084 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.462184 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: 
\"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.462394 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.462806 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-config\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.463461 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.489079 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfb6n\" (UniqueName: \"kubernetes.io/projected/522206b4-5f50-46e4-a363-24021bd65471-kube-api-access-hfb6n\") pod \"dnsmasq-dns-785d8bcb8c-mpdz8\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.582233 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b18f8ba-fa1b-4a70-8774-0df51c645ed9" path="/var/lib/kubelet/pods/9b18f8ba-fa1b-4a70-8774-0df51c645ed9/volumes" Feb 17 14:27:56 crc kubenswrapper[4836]: I0217 14:27:56.609092 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.236598 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.239222 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.247164 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.247372 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qbbvn" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.251068 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.261376 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.384896 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.385150 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 
14:27:57.385518 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-logs\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.385863 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.385963 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqt5t\" (UniqueName: \"kubernetes.io/projected/c3515b05-3f55-44fc-9578-ee0d73cb7382-kube-api-access-jqt5t\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.386004 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.386082 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc 
kubenswrapper[4836]: I0217 14:27:57.397367 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.399308 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.402267 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.416472 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.488635 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.488726 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-logs\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.488800 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.488828 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqt5t\" (UniqueName: 
\"kubernetes.io/projected/c3515b05-3f55-44fc-9578-ee0d73cb7382-kube-api-access-jqt5t\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.488850 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.488874 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.488917 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.493330 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-logs\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.493357 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.496118 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.505418 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.508150 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.508202 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1c05c143b5a67726d067625f4c5da25dac4624853da03b1088e3ef561519b77/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.510433 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqt5t\" (UniqueName: \"kubernetes.io/projected/c3515b05-3f55-44fc-9578-ee0d73cb7382-kube-api-access-jqt5t\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.538220 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.567847 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") " pod="openstack/glance-default-external-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.591044 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-logs\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.591199 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.591278 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.591340 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njnqq\" (UniqueName: \"kubernetes.io/projected/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-kube-api-access-njnqq\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.591376 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.591591 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.591702 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.712427 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-logs\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.712703 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.712960 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.712999 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-njnqq\" (UniqueName: \"kubernetes.io/projected/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-kube-api-access-njnqq\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.713024 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.713376 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.713439 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.714050 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-logs\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.714851 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.719755 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.719859 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.719901 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20e9fd566d593755c515c6f55c386051b7cebe94721b27d85313d87ab22fcec4/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.721033 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.743900 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.785002 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.797774 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njnqq\" (UniqueName: \"kubernetes.io/projected/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-kube-api-access-njnqq\") pod \"glance-default-internal-api-0\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.808308 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:27:57 crc kubenswrapper[4836]: I0217 14:27:57.868274 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:27:59 crc kubenswrapper[4836]: I0217 14:27:59.418059 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:27:59 crc kubenswrapper[4836]: I0217 14:27:59.514331 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:27:59 crc kubenswrapper[4836]: I0217 14:27:59.752466 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wbh2w" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 17 14:27:59 crc kubenswrapper[4836]: I0217 14:27:59.953677 4836 generic.go:334] "Generic (PLEG): container finished" podID="6fec8667-7189-4e29-8362-37dd935d2db7" containerID="a82e37c7eb14ee548654e466a1de02d0ef7f18f1bf7fd37d772effc7cc961f91" exitCode=0 Feb 17 14:27:59 crc kubenswrapper[4836]: I0217 14:27:59.953749 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6fec8667-7189-4e29-8362-37dd935d2db7","Type":"ContainerDied","Data":"a82e37c7eb14ee548654e466a1de02d0ef7f18f1bf7fd37d772effc7cc961f91"} Feb 17 14:28:01 crc kubenswrapper[4836]: E0217 14:28:01.627071 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 14:28:01 crc kubenswrapper[4836]: E0217 14:28:01.628091 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqtgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g9l4s_openstack(18361bc2-5db1-4611-be18-38593e0b5d5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:28:01 crc kubenswrapper[4836]: E0217 14:28:01.629384 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g9l4s" 
podUID="18361bc2-5db1-4611-be18-38593e0b5d5d" Feb 17 14:28:01 crc kubenswrapper[4836]: E0217 14:28:01.982453 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g9l4s" podUID="18361bc2-5db1-4611-be18-38593e0b5d5d" Feb 17 14:28:04 crc kubenswrapper[4836]: I0217 14:28:04.747895 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wbh2w" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 17 14:28:04 crc kubenswrapper[4836]: I0217 14:28:04.748814 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:28:05 crc kubenswrapper[4836]: I0217 14:28:05.011955 4836 generic.go:334] "Generic (PLEG): container finished" podID="81ddbaec-f370-44a3-802b-26980ea65d2f" containerID="35ecf820b0414db1c94b077c083568db5d4a957bb9d735db9d4e378b6ebbc861" exitCode=0 Feb 17 14:28:05 crc kubenswrapper[4836]: I0217 14:28:05.012026 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sb6h7" event={"ID":"81ddbaec-f370-44a3-802b-26980ea65d2f","Type":"ContainerDied","Data":"35ecf820b0414db1c94b077c083568db5d4a957bb9d735db9d4e378b6ebbc861"} Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.534248 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sb6h7" Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.607844 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb7hf\" (UniqueName: \"kubernetes.io/projected/81ddbaec-f370-44a3-802b-26980ea65d2f-kube-api-access-rb7hf\") pod \"81ddbaec-f370-44a3-802b-26980ea65d2f\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.608178 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-combined-ca-bundle\") pod \"81ddbaec-f370-44a3-802b-26980ea65d2f\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.608240 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-config\") pod \"81ddbaec-f370-44a3-802b-26980ea65d2f\" (UID: \"81ddbaec-f370-44a3-802b-26980ea65d2f\") " Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.617663 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ddbaec-f370-44a3-802b-26980ea65d2f-kube-api-access-rb7hf" (OuterVolumeSpecName: "kube-api-access-rb7hf") pod "81ddbaec-f370-44a3-802b-26980ea65d2f" (UID: "81ddbaec-f370-44a3-802b-26980ea65d2f"). InnerVolumeSpecName "kube-api-access-rb7hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.865235 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-config" (OuterVolumeSpecName: "config") pod "81ddbaec-f370-44a3-802b-26980ea65d2f" (UID: "81ddbaec-f370-44a3-802b-26980ea65d2f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.871626 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb7hf\" (UniqueName: \"kubernetes.io/projected/81ddbaec-f370-44a3-802b-26980ea65d2f-kube-api-access-rb7hf\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.871658 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.875989 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ddbaec-f370-44a3-802b-26980ea65d2f" (UID: "81ddbaec-f370-44a3-802b-26980ea65d2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:12 crc kubenswrapper[4836]: I0217 14:28:12.974351 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ddbaec-f370-44a3-802b-26980ea65d2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.107054 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sb6h7" event={"ID":"81ddbaec-f370-44a3-802b-26980ea65d2f","Type":"ContainerDied","Data":"a94f2fee60c2cb9701b67002fd76857eaeaf8cc9cdf14886139c9af2827d62a6"} Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.107109 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94f2fee60c2cb9701b67002fd76857eaeaf8cc9cdf14886139c9af2827d62a6" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.107206 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sb6h7" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.940460 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mpdz8"] Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.958393 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56bdc657f6-lhdd4"] Feb 17 14:28:13 crc kubenswrapper[4836]: E0217 14:28:13.959090 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ddbaec-f370-44a3-802b-26980ea65d2f" containerName="neutron-db-sync" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.959120 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ddbaec-f370-44a3-802b-26980ea65d2f" containerName="neutron-db-sync" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.959416 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ddbaec-f370-44a3-802b-26980ea65d2f" containerName="neutron-db-sync" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.960805 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.965831 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.966352 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.972595 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.972682 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qfhnd" Feb 17 14:28:13 crc kubenswrapper[4836]: I0217 14:28:13.981414 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56bdc657f6-lhdd4"] Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:13.999390 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-config\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:13.999460 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-combined-ca-bundle\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:13.999586 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-httpd-config\") pod \"neutron-56bdc657f6-lhdd4\" (UID: 
\"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:13.999620 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-ovndb-tls-certs\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:13.999648 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzb4\" (UniqueName: \"kubernetes.io/projected/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-kube-api-access-whzb4\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: E0217 14:28:14.048430 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 14:28:14 crc kubenswrapper[4836]: E0217 14:28:14.048681 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftmbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qqwhc_openstack(8185c649-f1ad-4230-830d-07d002e5b358): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:28:14 crc kubenswrapper[4836]: E0217 14:28:14.051155 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qqwhc" podUID="8185c649-f1ad-4230-830d-07d002e5b358" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.082514 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-knj6m"] Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.094523 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-knj6m"] Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.094711 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105439 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-config\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105541 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-combined-ca-bundle\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105589 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7h58\" (UniqueName: \"kubernetes.io/projected/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-kube-api-access-f7h58\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105639 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105669 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-svc\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: 
\"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105698 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105781 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105830 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-httpd-config\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105872 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-ovndb-tls-certs\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105904 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whzb4\" (UniqueName: \"kubernetes.io/projected/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-kube-api-access-whzb4\") pod \"neutron-56bdc657f6-lhdd4\" (UID: 
\"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.105966 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-config\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.121827 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-config\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.128163 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-combined-ca-bundle\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.128521 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-httpd-config\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.155287 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzb4\" (UniqueName: \"kubernetes.io/projected/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-kube-api-access-whzb4\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc 
kubenswrapper[4836]: I0217 14:28:14.157872 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-ovndb-tls-certs\") pod \"neutron-56bdc657f6-lhdd4\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.166857 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wbh2w" event={"ID":"312259c2-4f8f-401d-a19e-64d0bc7dd35f","Type":"ContainerDied","Data":"c6f4101d16fd86bcceb0625244616ff16d1c5665adecebcc6d46b7d7f983a200"} Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.166922 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f4101d16fd86bcceb0625244616ff16d1c5665adecebcc6d46b7d7f983a200" Feb 17 14:28:14 crc kubenswrapper[4836]: E0217 14:28:14.168622 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qqwhc" podUID="8185c649-f1ad-4230-830d-07d002e5b358" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.211241 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-config\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.212650 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-config\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " 
pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.215353 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7h58\" (UniqueName: \"kubernetes.io/projected/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-kube-api-access-f7h58\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.215464 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.215497 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-svc\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.215516 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.215688 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" 
Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.217156 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.217316 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-svc\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.217918 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.218411 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.235920 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7h58\" (UniqueName: \"kubernetes.io/projected/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-kube-api-access-f7h58\") pod \"dnsmasq-dns-55f844cf75-knj6m\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.311213 4836 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.314366 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.420274 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-config\") pod \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.420378 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-nb\") pod \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.420661 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76vb5\" (UniqueName: \"kubernetes.io/projected/312259c2-4f8f-401d-a19e-64d0bc7dd35f-kube-api-access-76vb5\") pod \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.420903 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-dns-svc\") pod \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\" (UID: \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.420963 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-sb\") pod \"312259c2-4f8f-401d-a19e-64d0bc7dd35f\" (UID: 
\"312259c2-4f8f-401d-a19e-64d0bc7dd35f\") " Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.421043 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.425989 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312259c2-4f8f-401d-a19e-64d0bc7dd35f-kube-api-access-76vb5" (OuterVolumeSpecName: "kube-api-access-76vb5") pod "312259c2-4f8f-401d-a19e-64d0bc7dd35f" (UID: "312259c2-4f8f-401d-a19e-64d0bc7dd35f"). InnerVolumeSpecName "kube-api-access-76vb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.477657 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "312259c2-4f8f-401d-a19e-64d0bc7dd35f" (UID: "312259c2-4f8f-401d-a19e-64d0bc7dd35f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.480087 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "312259c2-4f8f-401d-a19e-64d0bc7dd35f" (UID: "312259c2-4f8f-401d-a19e-64d0bc7dd35f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.493633 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "312259c2-4f8f-401d-a19e-64d0bc7dd35f" (UID: "312259c2-4f8f-401d-a19e-64d0bc7dd35f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.493907 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-config" (OuterVolumeSpecName: "config") pod "312259c2-4f8f-401d-a19e-64d0bc7dd35f" (UID: "312259c2-4f8f-401d-a19e-64d0bc7dd35f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.524700 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76vb5\" (UniqueName: \"kubernetes.io/projected/312259c2-4f8f-401d-a19e-64d0bc7dd35f-kube-api-access-76vb5\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.524735 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.524745 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.524754 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.524762 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312259c2-4f8f-401d-a19e-64d0bc7dd35f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:14 crc kubenswrapper[4836]: I0217 14:28:14.747714 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wbh2w" 
podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Feb 17 14:28:15 crc kubenswrapper[4836]: I0217 14:28:15.183693 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wbh2w" Feb 17 14:28:15 crc kubenswrapper[4836]: I0217 14:28:15.217980 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wbh2w"] Feb 17 14:28:15 crc kubenswrapper[4836]: I0217 14:28:15.230880 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wbh2w"] Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.475727 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bc789578f-mcrrx"] Feb 17 14:28:16 crc kubenswrapper[4836]: E0217 14:28:16.480015 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.480051 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" Feb 17 14:28:16 crc kubenswrapper[4836]: E0217 14:28:16.480089 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="init" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.480098 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="init" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.480355 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" containerName="dnsmasq-dns" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.482647 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.488373 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.489214 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.496208 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc789578f-mcrrx"] Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.585214 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312259c2-4f8f-401d-a19e-64d0bc7dd35f" path="/var/lib/kubelet/pods/312259c2-4f8f-401d-a19e-64d0bc7dd35f/volumes" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.591379 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-internal-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.591505 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-combined-ca-bundle\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.591551 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-ovndb-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " 
pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.591707 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-httpd-config\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.591769 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-config\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.592665 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-public-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.592698 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv8ws\" (UniqueName: \"kubernetes.io/projected/a7dc98d2-302d-4633-8123-fe76bb7dbd40-kube-api-access-nv8ws\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.695211 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-public-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " 
pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.695286 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv8ws\" (UniqueName: \"kubernetes.io/projected/a7dc98d2-302d-4633-8123-fe76bb7dbd40-kube-api-access-nv8ws\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.695361 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-internal-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.695413 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-combined-ca-bundle\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.695441 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-ovndb-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.695495 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-httpd-config\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 
14:28:16.695521 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-config\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.703756 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-internal-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.704027 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-config\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.704213 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-combined-ca-bundle\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.712396 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-ovndb-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.713270 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-public-tls-certs\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.714016 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-httpd-config\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.718274 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv8ws\" (UniqueName: \"kubernetes.io/projected/a7dc98d2-302d-4633-8123-fe76bb7dbd40-kube-api-access-nv8ws\") pod \"neutron-bc789578f-mcrrx\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:16 crc kubenswrapper[4836]: I0217 14:28:16.810955 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:21 crc kubenswrapper[4836]: I0217 14:28:21.883668 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vmgps"] Feb 17 14:28:21 crc kubenswrapper[4836]: I0217 14:28:21.906841 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mpdz8"] Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.004510 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.090745 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.225717 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc789578f-mcrrx"] Feb 17 14:28:22 crc kubenswrapper[4836]: E0217 14:28:22.345135 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 17 14:28:22 crc kubenswrapper[4836]: E0217 14:28:22.345206 4836 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 17 14:28:22 crc kubenswrapper[4836]: E0217 14:28:22.345467 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hfrn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-pvljf_openstack(4e016162-2025-44ad-989d-ce71d9f8f9bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 14:28:22 crc kubenswrapper[4836]: E0217 14:28:22.346759 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-pvljf" podUID="4e016162-2025-44ad-989d-ce71d9f8f9bf"
Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.378990 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6","Type":"ContainerStarted","Data":"1bdfb1c3c1f902411f2380ceedd49dc7958dbb9a04d7b4060c81c560dbbd7e40"}
Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.388461 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmgps" event={"ID":"10331926-261d-4e44-a8c2-89846903ca12","Type":"ContainerStarted","Data":"02c724d11382ee98b69e6abaefd40a4e20c1b972def951d53202ec6f8b2b38f2"}
Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.396613 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6fec8667-7189-4e29-8362-37dd935d2db7","Type":"ContainerStarted","Data":"9a61c69c04f2f3985c0e460f54a13203f50cbdd884dd38fc58f9989d463b2202"}
Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.399831 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" event={"ID":"522206b4-5f50-46e4-a363-24021bd65471","Type":"ContainerStarted","Data":"fee297ba365480160ebc2531b71af26f97646ac6816136e490dca68ef994f4eb"}
Feb 17 14:28:22 crc kubenswrapper[4836]: I0217 14:28:22.435148 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc789578f-mcrrx" event={"ID":"a7dc98d2-302d-4633-8123-fe76bb7dbd40","Type":"ContainerStarted","Data":"61e1414473aaed7533a1bb0fd531409b1cf0fa9ea0b92c1ed51519923f9cbabf"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.020529 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-knj6m"]
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.047193 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56bdc657f6-lhdd4"]
Feb 17 14:28:23 crc kubenswrapper[4836]: W0217 14:28:23.082225 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7622952e_3f9a_4569_8f4d_8a07f1cbcd2c.slice/crio-8555cc4b8a651ad8d38601eead66d8910d6d4cd8c7c50d4ab726898662d8c02f WatchSource:0}: Error finding container 8555cc4b8a651ad8d38601eead66d8910d6d4cd8c7c50d4ab726898662d8c02f: Status 404 returned error can't find the container with id 8555cc4b8a651ad8d38601eead66d8910d6d4cd8c7c50d4ab726898662d8c02f
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.455851 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bdc657f6-lhdd4" event={"ID":"10f74a60-5438-45cd-a8e1-74ccc1c3b16a","Type":"ContainerStarted","Data":"3131621aad6bddf8f2539d514b9526e7c3c20a9b86076d983784e09cb9285473"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.458783 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pdhxs" event={"ID":"1fe4b42c-afbf-41e1-8035-5fffb156eadc","Type":"ContainerStarted","Data":"705f230fd2d44c1059294c17cc5410cef58dcabc1573c4e7f4f531d00aad46ec"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.472784 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc789578f-mcrrx" event={"ID":"a7dc98d2-302d-4633-8123-fe76bb7dbd40","Type":"ContainerStarted","Data":"29474b05f933bb7261368e691fe6f6124baae6cdcaac7f0997ad485f3fcff20d"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.472848 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc789578f-mcrrx" event={"ID":"a7dc98d2-302d-4633-8123-fe76bb7dbd40","Type":"ContainerStarted","Data":"ef329f1c472e28115c477d0f824ce0452609f500341f5fe161170bb1b7dd1f36"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.473350 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bc789578f-mcrrx"
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.487639 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerStarted","Data":"6102d176b1010bbf234d415140cba35d28570c5b514c7edd1c4a0962a14c5149"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.488510 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-pdhxs" podStartSLOduration=13.687779247 podStartE2EDuration="46.488494317s" podCreationTimestamp="2026-02-17 14:27:37 +0000 UTC" firstStartedPulling="2026-02-17 14:27:41.191924665 +0000 UTC m=+1287.534852924" lastFinishedPulling="2026-02-17 14:28:13.992639725 +0000 UTC m=+1320.335567994" observedRunningTime="2026-02-17 14:28:23.484588701 +0000 UTC m=+1329.827516970" watchObservedRunningTime="2026-02-17 14:28:23.488494317 +0000 UTC m=+1329.831422576"
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.502779 4836 generic.go:334] "Generic (PLEG): container finished" podID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerID="50d4a249bcc48e57b448052c5a0747dd07cf392d7bd62132728c04243ac9a69b" exitCode=0
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.502990 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" event={"ID":"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c","Type":"ContainerDied","Data":"50d4a249bcc48e57b448052c5a0747dd07cf392d7bd62132728c04243ac9a69b"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.503040 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" event={"ID":"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c","Type":"ContainerStarted","Data":"8555cc4b8a651ad8d38601eead66d8910d6d4cd8c7c50d4ab726898662d8c02f"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.506860 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3515b05-3f55-44fc-9578-ee0d73cb7382","Type":"ContainerStarted","Data":"1c06a664d1e654a6bd86f236b173b45cd29b036aab35504b6bdf0b6a6af440cb"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.525317 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bc789578f-mcrrx" podStartSLOduration=7.5252577259999995 podStartE2EDuration="7.525257726s" podCreationTimestamp="2026-02-17 14:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:23.520935439 +0000 UTC m=+1329.863863718" watchObservedRunningTime="2026-02-17 14:28:23.525257726 +0000 UTC m=+1329.868185995"
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.526136 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmgps" event={"ID":"10331926-261d-4e44-a8c2-89846903ca12","Type":"ContainerStarted","Data":"0a4b8ba8b2087b1a38486d6f6172aee2da2f8fb8e22feee2e93bb22306b6558e"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.536036 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g9l4s" event={"ID":"18361bc2-5db1-4611-be18-38593e0b5d5d","Type":"ContainerStarted","Data":"13ef4f24a42269dbbf22aa927159da757007caa607e5236e1441cff6b685fe12"}
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.548900 4836 generic.go:334] "Generic (PLEG): container finished" podID="522206b4-5f50-46e4-a363-24021bd65471" containerID="96b7929b0efa57ddeb597b383fe5cd1d57bdd64461f3a7e920a20b1f3f965d47" exitCode=0
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.550402 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" event={"ID":"522206b4-5f50-46e4-a363-24021bd65471","Type":"ContainerDied","Data":"96b7929b0efa57ddeb597b383fe5cd1d57bdd64461f3a7e920a20b1f3f965d47"}
Feb 17 14:28:23 crc kubenswrapper[4836]: E0217 14:28:23.570240 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-pvljf" podUID="4e016162-2025-44ad-989d-ce71d9f8f9bf"
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.666560 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g9l4s" podStartSLOduration=5.892190719 podStartE2EDuration="47.666526578s" podCreationTimestamp="2026-02-17 14:27:36 +0000 UTC" firstStartedPulling="2026-02-17 14:27:40.605470404 +0000 UTC m=+1286.948398673" lastFinishedPulling="2026-02-17 14:28:22.379806263 +0000 UTC m=+1328.722734532" observedRunningTime="2026-02-17 14:28:23.612722245 +0000 UTC m=+1329.955650524" watchObservedRunningTime="2026-02-17 14:28:23.666526578 +0000 UTC m=+1330.009454867"
Feb 17 14:28:23 crc kubenswrapper[4836]: I0217 14:28:23.823190 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vmgps" podStartSLOduration=28.823158417 podStartE2EDuration="28.823158417s" podCreationTimestamp="2026-02-17 14:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:23.68794678 +0000 UTC m=+1330.030875049" watchObservedRunningTime="2026-02-17 14:28:23.823158417 +0000 UTC m=+1330.166086686"
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.153801 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8"
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.272617 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-config\") pod \"522206b4-5f50-46e4-a363-24021bd65471\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") "
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.272741 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfb6n\" (UniqueName: \"kubernetes.io/projected/522206b4-5f50-46e4-a363-24021bd65471-kube-api-access-hfb6n\") pod \"522206b4-5f50-46e4-a363-24021bd65471\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") "
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.272860 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-sb\") pod \"522206b4-5f50-46e4-a363-24021bd65471\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") "
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.272944 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-nb\") pod \"522206b4-5f50-46e4-a363-24021bd65471\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") "
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.272988 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-svc\") pod \"522206b4-5f50-46e4-a363-24021bd65471\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") "
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.273022 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-swift-storage-0\") pod \"522206b4-5f50-46e4-a363-24021bd65471\" (UID: \"522206b4-5f50-46e4-a363-24021bd65471\") "
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.283799 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522206b4-5f50-46e4-a363-24021bd65471-kube-api-access-hfb6n" (OuterVolumeSpecName: "kube-api-access-hfb6n") pod "522206b4-5f50-46e4-a363-24021bd65471" (UID: "522206b4-5f50-46e4-a363-24021bd65471"). InnerVolumeSpecName "kube-api-access-hfb6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.311074 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "522206b4-5f50-46e4-a363-24021bd65471" (UID: "522206b4-5f50-46e4-a363-24021bd65471"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.329658 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-config" (OuterVolumeSpecName: "config") pod "522206b4-5f50-46e4-a363-24021bd65471" (UID: "522206b4-5f50-46e4-a363-24021bd65471"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.333818 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "522206b4-5f50-46e4-a363-24021bd65471" (UID: "522206b4-5f50-46e4-a363-24021bd65471"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.335938 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "522206b4-5f50-46e4-a363-24021bd65471" (UID: "522206b4-5f50-46e4-a363-24021bd65471"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.359966 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "522206b4-5f50-46e4-a363-24021bd65471" (UID: "522206b4-5f50-46e4-a363-24021bd65471"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.379662 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.380083 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.380159 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.380216 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.380271 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522206b4-5f50-46e4-a363-24021bd65471-config\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.380365 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfb6n\" (UniqueName: \"kubernetes.io/projected/522206b4-5f50-46e4-a363-24021bd65471-kube-api-access-hfb6n\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.564893 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8" event={"ID":"522206b4-5f50-46e4-a363-24021bd65471","Type":"ContainerDied","Data":"fee297ba365480160ebc2531b71af26f97646ac6816136e490dca68ef994f4eb"}
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.565424 4836 scope.go:117] "RemoveContainer" containerID="96b7929b0efa57ddeb597b383fe5cd1d57bdd64461f3a7e920a20b1f3f965d47"
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.565634 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mpdz8"
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.635997 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6","Type":"ContainerStarted","Data":"869ce82d3359043cc4431b6028a34dabc445615d7136178d3f86f4e3032f0d97"}
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.663652 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" event={"ID":"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c","Type":"ContainerStarted","Data":"85a21ea6f28662473a5cbe42dfa68ac85c766a6f09753e438f2c37af7356f777"}
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.665029 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-knj6m"
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.681688 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3515b05-3f55-44fc-9578-ee0d73cb7382","Type":"ContainerStarted","Data":"4184adaaf005ab31a219f3203092826b82f06f8762db3d006dcd2fec9f1d8ea0"}
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.687264 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bdc657f6-lhdd4" event={"ID":"10f74a60-5438-45cd-a8e1-74ccc1c3b16a","Type":"ContainerStarted","Data":"d2098b2a7c4dcbee4fa27ea9bfa1c19e32c5f83e96aa663b877abb8284852c74"}
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.687463 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bdc657f6-lhdd4" event={"ID":"10f74a60-5438-45cd-a8e1-74ccc1c3b16a","Type":"ContainerStarted","Data":"b7a5e210ee7a505ae087f3c56329942b71db962383e4ae1693812dd8340169c8"}
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.689079 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56bdc657f6-lhdd4"
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.789330 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mpdz8"]
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.819514 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mpdz8"]
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.834327 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56bdc657f6-lhdd4" podStartSLOduration=11.834268717 podStartE2EDuration="11.834268717s" podCreationTimestamp="2026-02-17 14:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:24.769765424 +0000 UTC m=+1331.112693693" watchObservedRunningTime="2026-02-17 14:28:24.834268717 +0000 UTC m=+1331.177196996"
Feb 17 14:28:24 crc kubenswrapper[4836]: I0217 14:28:24.858976 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" podStartSLOduration=11.858937989 podStartE2EDuration="11.858937989s" podCreationTimestamp="2026-02-17 14:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:24.80712148 +0000 UTC m=+1331.150049769" watchObservedRunningTime="2026-02-17 14:28:24.858937989 +0000 UTC m=+1331.201866258"
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.707169 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6","Type":"ContainerStarted","Data":"0f30cc20795a1be11a63cd83ca82ef1f96bc339f29c98fa9ea79201d66ddd14b"}
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.707310 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-log" containerID="cri-o://869ce82d3359043cc4431b6028a34dabc445615d7136178d3f86f4e3032f0d97" gracePeriod=30
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.707399 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-httpd" containerID="cri-o://0f30cc20795a1be11a63cd83ca82ef1f96bc339f29c98fa9ea79201d66ddd14b" gracePeriod=30
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.717341 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3515b05-3f55-44fc-9578-ee0d73cb7382","Type":"ContainerStarted","Data":"6ae7bcd92058237aae5f45aa971c6650154498599d3d666733d736f045d19bfc"}
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.717853 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-log" containerID="cri-o://4184adaaf005ab31a219f3203092826b82f06f8762db3d006dcd2fec9f1d8ea0" gracePeriod=30
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.717991 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-httpd" containerID="cri-o://6ae7bcd92058237aae5f45aa971c6650154498599d3d666733d736f045d19bfc" gracePeriod=30
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.765137 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=29.765106596 podStartE2EDuration="29.765106596s" podCreationTimestamp="2026-02-17 14:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:25.746490109 +0000 UTC m=+1332.089418388" watchObservedRunningTime="2026-02-17 14:28:25.765106596 +0000 UTC m=+1332.108034865"
Feb 17 14:28:25 crc kubenswrapper[4836]: I0217 14:28:25.804119 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.804093406 podStartE2EDuration="29.804093406s" podCreationTimestamp="2026-02-17 14:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:25.785395037 +0000 UTC m=+1332.128323316" watchObservedRunningTime="2026-02-17 14:28:25.804093406 +0000 UTC m=+1332.147021675"
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.650502 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="522206b4-5f50-46e4-a363-24021bd65471" path="/var/lib/kubelet/pods/522206b4-5f50-46e4-a363-24021bd65471/volumes"
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.752832 4836 generic.go:334] "Generic (PLEG): container finished" podID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerID="6ae7bcd92058237aae5f45aa971c6650154498599d3d666733d736f045d19bfc" exitCode=0
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.753672 4836 generic.go:334] "Generic (PLEG): container finished" podID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerID="4184adaaf005ab31a219f3203092826b82f06f8762db3d006dcd2fec9f1d8ea0" exitCode=143
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.752943 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3515b05-3f55-44fc-9578-ee0d73cb7382","Type":"ContainerDied","Data":"6ae7bcd92058237aae5f45aa971c6650154498599d3d666733d736f045d19bfc"}
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.753815 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3515b05-3f55-44fc-9578-ee0d73cb7382","Type":"ContainerDied","Data":"4184adaaf005ab31a219f3203092826b82f06f8762db3d006dcd2fec9f1d8ea0"}
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.761223 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6fec8667-7189-4e29-8362-37dd935d2db7","Type":"ContainerStarted","Data":"71b1cb8f78f78e8c4dd6692ccdd01ad2461897f8475fbee0b7f838d8c85e743a"}
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.761276 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6fec8667-7189-4e29-8362-37dd935d2db7","Type":"ContainerStarted","Data":"937e842c15cf5da92c6be36a32ba414a091df39b4e16414d5f646f11edcd1602"}
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.772314 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerStarted","Data":"adc3ef3643d684dbbbf0790a30dd752752d5a28971c3915143c0a6ec314bc365"}
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.786392 4836 generic.go:334] "Generic (PLEG): container finished" podID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerID="0f30cc20795a1be11a63cd83ca82ef1f96bc339f29c98fa9ea79201d66ddd14b" exitCode=0
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.786444 4836 generic.go:334] "Generic (PLEG): container finished" podID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerID="869ce82d3359043cc4431b6028a34dabc445615d7136178d3f86f4e3032f0d97" exitCode=143
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.788384 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6","Type":"ContainerDied","Data":"0f30cc20795a1be11a63cd83ca82ef1f96bc339f29c98fa9ea79201d66ddd14b"}
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.788468 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6","Type":"ContainerDied","Data":"869ce82d3359043cc4431b6028a34dabc445615d7136178d3f86f4e3032f0d97"}
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.814405 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=68.814369624 podStartE2EDuration="1m8.814369624s" podCreationTimestamp="2026-02-17 14:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:26.802274576 +0000 UTC m=+1333.145202855" watchObservedRunningTime="2026-02-17 14:28:26.814369624 +0000 UTC m=+1333.157297903"
Feb 17 14:28:26 crc kubenswrapper[4836]: I0217 14:28:26.943473 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.088866 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-combined-ca-bundle\") pod \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.088969 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-scripts\") pod \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.089110 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-httpd-run\") pod \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.089164 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njnqq\" (UniqueName: \"kubernetes.io/projected/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-kube-api-access-njnqq\") pod \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.089267 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-config-data\") pod \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.089684 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.089762 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-logs\") pod \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\" (UID: \"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.091777 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-logs" (OuterVolumeSpecName: "logs") pod "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" (UID: "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.095929 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" (UID: "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.103120 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-scripts" (OuterVolumeSpecName: "scripts") pod "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" (UID: "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.112350 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-kube-api-access-njnqq" (OuterVolumeSpecName: "kube-api-access-njnqq") pod "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" (UID: "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6"). InnerVolumeSpecName "kube-api-access-njnqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.123868 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf" (OuterVolumeSpecName: "glance") pod "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" (UID: "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6"). InnerVolumeSpecName "pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.133407 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.179706 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" (UID: "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195039 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-config-data" (OuterVolumeSpecName: "config-data") pod "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" (UID: "c16e95ef-bb15-4f5c-b53d-42ca0c183ed6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195350 4836 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") on node \"crc\" "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195377 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-logs\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195389 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195400 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195409 4836 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195419 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njnqq\" (UniqueName: \"kubernetes.io/projected/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-kube-api-access-njnqq\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.195427 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.250677 4836 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.250923 4836 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf") on node "crc"
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.296587 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-scripts\") pod \"c3515b05-3f55-44fc-9578-ee0d73cb7382\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.296684 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-httpd-run\") pod \"c3515b05-3f55-44fc-9578-ee0d73cb7382\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.296873 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-config-data\") pod \"c3515b05-3f55-44fc-9578-ee0d73cb7382\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.297092 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"c3515b05-3f55-44fc-9578-ee0d73cb7382\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.297127 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-logs\") pod \"c3515b05-3f55-44fc-9578-ee0d73cb7382\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.297173 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-combined-ca-bundle\") pod \"c3515b05-3f55-44fc-9578-ee0d73cb7382\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.297346 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqt5t\" (UniqueName: \"kubernetes.io/projected/c3515b05-3f55-44fc-9578-ee0d73cb7382-kube-api-access-jqt5t\") pod \"c3515b05-3f55-44fc-9578-ee0d73cb7382\" (UID: \"c3515b05-3f55-44fc-9578-ee0d73cb7382\") "
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.297575 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3515b05-3f55-44fc-9578-ee0d73cb7382" (UID: "c3515b05-3f55-44fc-9578-ee0d73cb7382"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.298001 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-logs" (OuterVolumeSpecName: "logs") pod "c3515b05-3f55-44fc-9578-ee0d73cb7382" (UID: "c3515b05-3f55-44fc-9578-ee0d73cb7382"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.298608 4836 reconciler_common.go:293] "Volume detached for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.298632 4836 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.298649 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3515b05-3f55-44fc-9578-ee0d73cb7382-logs\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.304353 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3515b05-3f55-44fc-9578-ee0d73cb7382-kube-api-access-jqt5t" (OuterVolumeSpecName: "kube-api-access-jqt5t") pod "c3515b05-3f55-44fc-9578-ee0d73cb7382" (UID: "c3515b05-3f55-44fc-9578-ee0d73cb7382"). InnerVolumeSpecName "kube-api-access-jqt5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.305438 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-scripts" (OuterVolumeSpecName: "scripts") pod "c3515b05-3f55-44fc-9578-ee0d73cb7382" (UID: "c3515b05-3f55-44fc-9578-ee0d73cb7382"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.321031 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34" (OuterVolumeSpecName: "glance") pod "c3515b05-3f55-44fc-9578-ee0d73cb7382" (UID: "c3515b05-3f55-44fc-9578-ee0d73cb7382"). InnerVolumeSpecName "pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.344910 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3515b05-3f55-44fc-9578-ee0d73cb7382" (UID: "c3515b05-3f55-44fc-9578-ee0d73cb7382"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.364587 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-config-data" (OuterVolumeSpecName: "config-data") pod "c3515b05-3f55-44fc-9578-ee0d73cb7382" (UID: "c3515b05-3f55-44fc-9578-ee0d73cb7382"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.402011 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.402544 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.402678 4836 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") on node \"crc\" " Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.414439 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3515b05-3f55-44fc-9578-ee0d73cb7382-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.414826 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqt5t\" (UniqueName: \"kubernetes.io/projected/c3515b05-3f55-44fc-9578-ee0d73cb7382-kube-api-access-jqt5t\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.431677 4836 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.432577 4836 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34") on node "crc" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.517797 4836 reconciler_common.go:293] "Volume detached for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.802920 4836 generic.go:334] "Generic (PLEG): container finished" podID="10331926-261d-4e44-a8c2-89846903ca12" containerID="0a4b8ba8b2087b1a38486d6f6172aee2da2f8fb8e22feee2e93bb22306b6558e" exitCode=0 Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.803031 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmgps" event={"ID":"10331926-261d-4e44-a8c2-89846903ca12","Type":"ContainerDied","Data":"0a4b8ba8b2087b1a38486d6f6172aee2da2f8fb8e22feee2e93bb22306b6558e"} Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.805835 4836 generic.go:334] "Generic (PLEG): container finished" podID="18361bc2-5db1-4611-be18-38593e0b5d5d" containerID="13ef4f24a42269dbbf22aa927159da757007caa607e5236e1441cff6b685fe12" exitCode=0 Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.805904 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g9l4s" event={"ID":"18361bc2-5db1-4611-be18-38593e0b5d5d","Type":"ContainerDied","Data":"13ef4f24a42269dbbf22aa927159da757007caa607e5236e1441cff6b685fe12"} Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.809522 4836 generic.go:334] "Generic (PLEG): container finished" podID="1fe4b42c-afbf-41e1-8035-5fffb156eadc" containerID="705f230fd2d44c1059294c17cc5410cef58dcabc1573c4e7f4f531d00aad46ec" exitCode=0 Feb 17 
14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.809623 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pdhxs" event={"ID":"1fe4b42c-afbf-41e1-8035-5fffb156eadc","Type":"ContainerDied","Data":"705f230fd2d44c1059294c17cc5410cef58dcabc1573c4e7f4f531d00aad46ec"} Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.814327 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.814404 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c16e95ef-bb15-4f5c-b53d-42ca0c183ed6","Type":"ContainerDied","Data":"1bdfb1c3c1f902411f2380ceedd49dc7958dbb9a04d7b4060c81c560dbbd7e40"} Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.814484 4836 scope.go:117] "RemoveContainer" containerID="0f30cc20795a1be11a63cd83ca82ef1f96bc339f29c98fa9ea79201d66ddd14b" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.819417 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.819450 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3515b05-3f55-44fc-9578-ee0d73cb7382","Type":"ContainerDied","Data":"1c06a664d1e654a6bd86f236b173b45cd29b036aab35504b6bdf0b6a6af440cb"} Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.934539 4836 scope.go:117] "RemoveContainer" containerID="869ce82d3359043cc4431b6028a34dabc445615d7136178d3f86f4e3032f0d97" Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.934767 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.947142 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.961871 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:28:27 crc kubenswrapper[4836]: I0217 14:28:27.995498 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.012063 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:28:28 crc kubenswrapper[4836]: E0217 14:28:28.012717 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-httpd" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.012747 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-httpd" Feb 17 14:28:28 crc kubenswrapper[4836]: E0217 14:28:28.012770 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522206b4-5f50-46e4-a363-24021bd65471" containerName="init" Feb 17 14:28:28 crc 
kubenswrapper[4836]: I0217 14:28:28.012779 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="522206b4-5f50-46e4-a363-24021bd65471" containerName="init" Feb 17 14:28:28 crc kubenswrapper[4836]: E0217 14:28:28.012800 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-httpd" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.012809 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-httpd" Feb 17 14:28:28 crc kubenswrapper[4836]: E0217 14:28:28.012820 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-log" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.012828 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-log" Feb 17 14:28:28 crc kubenswrapper[4836]: E0217 14:28:28.012845 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-log" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.012854 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-log" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.013119 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="522206b4-5f50-46e4-a363-24021bd65471" containerName="init" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.013149 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-log" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.013163 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" containerName="glance-httpd" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.013171 4836 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-log" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.013192 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" containerName="glance-httpd" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.014673 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.020593 4836 scope.go:117] "RemoveContainer" containerID="6ae7bcd92058237aae5f45aa971c6650154498599d3d666733d736f045d19bfc" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.028753 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.029381 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.029558 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.029721 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qbbvn" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.063085 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.084637 4836 scope.go:117] "RemoveContainer" containerID="4184adaaf005ab31a219f3203092826b82f06f8762db3d006dcd2fec9f1d8ea0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.107749 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.113553 4836 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.118808 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.119941 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.148045 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.156981 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.157222 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.161569 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.161630 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.161783 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6smd\" (UniqueName: \"kubernetes.io/projected/9fc032cb-3063-4e39-a91f-ccc89defe9c4-kube-api-access-x6smd\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.161858 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.161932 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.161952 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.264703 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x6smd\" (UniqueName: \"kubernetes.io/projected/9fc032cb-3063-4e39-a91f-ccc89defe9c4-kube-api-access-x6smd\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.264810 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.264846 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.264863 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.265379 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cbj8\" (UniqueName: \"kubernetes.io/projected/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-kube-api-access-8cbj8\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.265421 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.265914 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266086 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266325 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266583 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266770 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-scripts\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266812 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266894 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266895 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.266924 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.267160 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-logs\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.267324 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-logs\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.267324 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.273816 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.275603 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.275690 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.276277 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.276328 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20e9fd566d593755c515c6f55c386051b7cebe94721b27d85313d87ab22fcec4/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.278877 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.292340 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6smd\" (UniqueName: \"kubernetes.io/projected/9fc032cb-3063-4e39-a91f-ccc89defe9c4-kube-api-access-x6smd\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.338064 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " pod="openstack/glance-default-internal-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.362775 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369579 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-logs\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369678 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369780 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cbj8\" (UniqueName: \"kubernetes.io/projected/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-kube-api-access-8cbj8\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369801 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369856 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369913 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369934 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-scripts\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.369957 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.375105 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-logs\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.375280 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.380200 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-scripts\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.381191 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.383722 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.388105 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.388174 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1c05c143b5a67726d067625f4c5da25dac4624853da03b1088e3ef561519b77/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.388392 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.392889 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cbj8\" (UniqueName: \"kubernetes.io/projected/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-kube-api-access-8cbj8\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.435365 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.456062 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.623380 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16e95ef-bb15-4f5c-b53d-42ca0c183ed6" path="/var/lib/kubelet/pods/c16e95ef-bb15-4f5c-b53d-42ca0c183ed6/volumes"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.625856 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3515b05-3f55-44fc-9578-ee0d73cb7382" path="/var/lib/kubelet/pods/c3515b05-3f55-44fc-9578-ee0d73cb7382/volumes"
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.854348 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqwhc" event={"ID":"8185c649-f1ad-4230-830d-07d002e5b358","Type":"ContainerStarted","Data":"ff24c89536ae06cf6a0fbffcb68050de3e8ed22356c912b4e7e87afbef99480d"}
Feb 17 14:28:28 crc kubenswrapper[4836]: I0217 14:28:28.904412 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qqwhc" podStartSLOduration=5.757878491 podStartE2EDuration="52.904379829s" podCreationTimestamp="2026-02-17 14:27:36 +0000 UTC" firstStartedPulling="2026-02-17 14:27:39.956074452 +0000 UTC m=+1286.299002721" lastFinishedPulling="2026-02-17 14:28:27.10257579 +0000 UTC m=+1333.445504059" observedRunningTime="2026-02-17 14:28:28.881213309 +0000 UTC m=+1335.224141598" watchObservedRunningTime="2026-02-17 14:28:28.904379829 +0000 UTC m=+1335.247308098"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.036997 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.056038 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.296097 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.316573 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-knj6m"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.417195 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"]
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.417536 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerName="dnsmasq-dns" containerID="cri-o://2fca778edd45bdfb866af7aaa0fc6f307d910a96cf1cd5ecfab2d14db35f72e8" gracePeriod=10
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.747985 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.753020 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vmgps"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.906656 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.913580 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c29f84b9-3879-4fc6-b2aa-e334bd08f24e","Type":"ContainerStarted","Data":"a73e6cf975755957f05fddc903522d5d75b3eb7f41eb5a42c5ad06b115f44634"}
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.951812 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-combined-ca-bundle\") pod \"10331926-261d-4e44-a8c2-89846903ca12\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.951987 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fe4b42c-afbf-41e1-8035-5fffb156eadc-logs\") pod \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952025 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k78mf\" (UniqueName: \"kubernetes.io/projected/1fe4b42c-afbf-41e1-8035-5fffb156eadc-kube-api-access-k78mf\") pod \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952082 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-combined-ca-bundle\") pod \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952154 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-config-data\") pod \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952183 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-scripts\") pod \"10331926-261d-4e44-a8c2-89846903ca12\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952331 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-config-data\") pod \"10331926-261d-4e44-a8c2-89846903ca12\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952374 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-credential-keys\") pod \"10331926-261d-4e44-a8c2-89846903ca12\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952486 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-scripts\") pod \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\" (UID: \"1fe4b42c-afbf-41e1-8035-5fffb156eadc\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952580 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fflgh\" (UniqueName: \"kubernetes.io/projected/10331926-261d-4e44-a8c2-89846903ca12-kube-api-access-fflgh\") pod \"10331926-261d-4e44-a8c2-89846903ca12\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.952617 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-fernet-keys\") pod \"10331926-261d-4e44-a8c2-89846903ca12\" (UID: \"10331926-261d-4e44-a8c2-89846903ca12\") "
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.959505 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fc032cb-3063-4e39-a91f-ccc89defe9c4","Type":"ContainerStarted","Data":"38f3541a8bef919fb1afd541589fd4540ccef699d3e6a2e7f1dcb0859f09ea45"}
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.967081 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe4b42c-afbf-41e1-8035-5fffb156eadc-logs" (OuterVolumeSpecName: "logs") pod "1fe4b42c-afbf-41e1-8035-5fffb156eadc" (UID: "1fe4b42c-afbf-41e1-8035-5fffb156eadc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.977480 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "10331926-261d-4e44-a8c2-89846903ca12" (UID: "10331926-261d-4e44-a8c2-89846903ca12"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.977636 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "10331926-261d-4e44-a8c2-89846903ca12" (UID: "10331926-261d-4e44-a8c2-89846903ca12"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.983864 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vmgps"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.984074 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmgps" event={"ID":"10331926-261d-4e44-a8c2-89846903ca12","Type":"ContainerDied","Data":"02c724d11382ee98b69e6abaefd40a4e20c1b972def951d53202ec6f8b2b38f2"}
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.984148 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c724d11382ee98b69e6abaefd40a4e20c1b972def951d53202ec6f8b2b38f2"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.986968 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe4b42c-afbf-41e1-8035-5fffb156eadc-kube-api-access-k78mf" (OuterVolumeSpecName: "kube-api-access-k78mf") pod "1fe4b42c-afbf-41e1-8035-5fffb156eadc" (UID: "1fe4b42c-afbf-41e1-8035-5fffb156eadc"). InnerVolumeSpecName "kube-api-access-k78mf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.987515 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-scripts" (OuterVolumeSpecName: "scripts") pod "1fe4b42c-afbf-41e1-8035-5fffb156eadc" (UID: "1fe4b42c-afbf-41e1-8035-5fffb156eadc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.989723 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-scripts" (OuterVolumeSpecName: "scripts") pod "10331926-261d-4e44-a8c2-89846903ca12" (UID: "10331926-261d-4e44-a8c2-89846903ca12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.993952 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g9l4s" event={"ID":"18361bc2-5db1-4611-be18-38593e0b5d5d","Type":"ContainerDied","Data":"b92684999081dd7ebe3c5f048ea5d9a568a0e24a28001ce5ab97b2282351bbcb"}
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.994042 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b92684999081dd7ebe3c5f048ea5d9a568a0e24a28001ce5ab97b2282351bbcb"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.994211 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g9l4s"
Feb 17 14:28:29 crc kubenswrapper[4836]: I0217 14:28:29.996569 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10331926-261d-4e44-a8c2-89846903ca12-kube-api-access-fflgh" (OuterVolumeSpecName: "kube-api-access-fflgh") pod "10331926-261d-4e44-a8c2-89846903ca12" (UID: "10331926-261d-4e44-a8c2-89846903ca12"). InnerVolumeSpecName "kube-api-access-fflgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.058461 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqtgf\" (UniqueName: \"kubernetes.io/projected/18361bc2-5db1-4611-be18-38593e0b5d5d-kube-api-access-sqtgf\") pod \"18361bc2-5db1-4611-be18-38593e0b5d5d\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") "
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.058597 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-combined-ca-bundle\") pod \"18361bc2-5db1-4611-be18-38593e0b5d5d\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") "
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.058781 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-db-sync-config-data\") pod \"18361bc2-5db1-4611-be18-38593e0b5d5d\" (UID: \"18361bc2-5db1-4611-be18-38593e0b5d5d\") "
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.059598 4836 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.059617 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.059627 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fflgh\" (UniqueName: \"kubernetes.io/projected/10331926-261d-4e44-a8c2-89846903ca12-kube-api-access-fflgh\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.059639 4836 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.059653 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fe4b42c-afbf-41e1-8035-5fffb156eadc-logs\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.059663 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k78mf\" (UniqueName: \"kubernetes.io/projected/1fe4b42c-afbf-41e1-8035-5fffb156eadc-kube-api-access-k78mf\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.059675 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.082547 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10331926-261d-4e44-a8c2-89846903ca12" (UID: "10331926-261d-4e44-a8c2-89846903ca12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.082917 4836 generic.go:334] "Generic (PLEG): container finished" podID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerID="2fca778edd45bdfb866af7aaa0fc6f307d910a96cf1cd5ecfab2d14db35f72e8" exitCode=0
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.083064 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" event={"ID":"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9","Type":"ContainerDied","Data":"2fca778edd45bdfb866af7aaa0fc6f307d910a96cf1cd5ecfab2d14db35f72e8"}
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.083913 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-config-data" (OuterVolumeSpecName: "config-data") pod "10331926-261d-4e44-a8c2-89846903ca12" (UID: "10331926-261d-4e44-a8c2-89846903ca12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.100391 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pdhxs" event={"ID":"1fe4b42c-afbf-41e1-8035-5fffb156eadc","Type":"ContainerDied","Data":"63af66d6a1d8223670f744aeb2ae7fb99f6f7344d8ab31ed483859da53a657a7"}
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.100446 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63af66d6a1d8223670f744aeb2ae7fb99f6f7344d8ab31ed483859da53a657a7"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.100585 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pdhxs"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.121126 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55d7557768-wvvpt"]
Feb 17 14:28:30 crc kubenswrapper[4836]: E0217 14:28:30.121827 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe4b42c-afbf-41e1-8035-5fffb156eadc" containerName="placement-db-sync"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.121848 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe4b42c-afbf-41e1-8035-5fffb156eadc" containerName="placement-db-sync"
Feb 17 14:28:30 crc kubenswrapper[4836]: E0217 14:28:30.121857 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18361bc2-5db1-4611-be18-38593e0b5d5d" containerName="barbican-db-sync"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.121864 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="18361bc2-5db1-4611-be18-38593e0b5d5d" containerName="barbican-db-sync"
Feb 17 14:28:30 crc kubenswrapper[4836]: E0217 14:28:30.121881 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10331926-261d-4e44-a8c2-89846903ca12" containerName="keystone-bootstrap"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.121888 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="10331926-261d-4e44-a8c2-89846903ca12" containerName="keystone-bootstrap"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.122097 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe4b42c-afbf-41e1-8035-5fffb156eadc" containerName="placement-db-sync"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.122115 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="10331926-261d-4e44-a8c2-89846903ca12" containerName="keystone-bootstrap"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.122133 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="18361bc2-5db1-4611-be18-38593e0b5d5d" containerName="barbican-db-sync"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.123667 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.129559 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.129859 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.129944 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "18361bc2-5db1-4611-be18-38593e0b5d5d" (UID: "18361bc2-5db1-4611-be18-38593e0b5d5d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.134759 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-config-data" (OuterVolumeSpecName: "config-data") pod "1fe4b42c-afbf-41e1-8035-5fffb156eadc" (UID: "1fe4b42c-afbf-41e1-8035-5fffb156eadc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.160350 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18361bc2-5db1-4611-be18-38593e0b5d5d-kube-api-access-sqtgf" (OuterVolumeSpecName: "kube-api-access-sqtgf") pod "18361bc2-5db1-4611-be18-38593e0b5d5d" (UID: "18361bc2-5db1-4611-be18-38593e0b5d5d"). InnerVolumeSpecName "kube-api-access-sqtgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.168821 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.168877 4836 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.168891 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.168903 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqtgf\" (UniqueName: \"kubernetes.io/projected/18361bc2-5db1-4611-be18-38593e0b5d5d-kube-api-access-sqtgf\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.168915 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10331926-261d-4e44-a8c2-89846903ca12-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.171604 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-78c4d587b5-cqhdl"]
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.174638 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.179427 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.180428 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.190686 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18361bc2-5db1-4611-be18-38593e0b5d5d" (UID: "18361bc2-5db1-4611-be18-38593e0b5d5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.207534 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-78c4d587b5-cqhdl"]
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.228342 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fe4b42c-afbf-41e1-8035-5fffb156eadc" (UID: "1fe4b42c-afbf-41e1-8035-5fffb156eadc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279181 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-public-tls-certs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279366 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-public-tls-certs\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279515 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-internal-tls-certs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279545 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-internal-tls-certs\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279576 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-scripts\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279677 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mjt\" (UniqueName: \"kubernetes.io/projected/f2f9acba-3f54-43b6-9461-31cba0cc954b-kube-api-access-99mjt\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279723 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-config-data\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279755 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-combined-ca-bundle\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279799 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-credential-keys\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279825 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfjpm\" (UniqueName: \"kubernetes.io/projected/21c73844-3235-4a12-9f77-901ba8614e11-kube-api-access-nfjpm\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279855 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-scripts\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279909 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-fernet-keys\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.279961 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c73844-3235-4a12-9f77-901ba8614e11-logs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.280105 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-combined-ca-bundle\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.280176 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-config-data\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.280816 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18361bc2-5db1-4611-be18-38593e0b5d5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.280840 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe4b42c-afbf-41e1-8035-5fffb156eadc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.338602 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55d7557768-wvvpt"]
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391149 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-internal-tls-certs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391246 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-internal-tls-certs\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391277 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-scripts\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391398 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99mjt\" (UniqueName: \"kubernetes.io/projected/f2f9acba-3f54-43b6-9461-31cba0cc954b-kube-api-access-99mjt\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391442 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-config-data\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391465 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-combined-ca-bundle\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391504 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-credential-keys\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391528 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfjpm\" (UniqueName: \"kubernetes.io/projected/21c73844-3235-4a12-9f77-901ba8614e11-kube-api-access-nfjpm\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt"
Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391581 4836 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-scripts\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391724 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-fernet-keys\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.391942 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c73844-3235-4a12-9f77-901ba8614e11-logs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.392029 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-combined-ca-bundle\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.392087 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-config-data\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.392172 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-public-tls-certs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.392238 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-public-tls-certs\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.396074 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c73844-3235-4a12-9f77-901ba8614e11-logs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.399508 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-credential-keys\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.409736 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-config-data\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.412212 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-scripts\") pod \"keystone-78c4d587b5-cqhdl\" (UID: 
\"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.413510 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-public-tls-certs\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.420878 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-internal-tls-certs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.423411 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-public-tls-certs\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.432402 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-scripts\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.446064 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-fernet-keys\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 
14:28:30.447025 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-combined-ca-bundle\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.447208 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-internal-tls-certs\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.447543 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-config-data\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.447628 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f9acba-3f54-43b6-9461-31cba0cc954b-combined-ca-bundle\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.455145 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfjpm\" (UniqueName: \"kubernetes.io/projected/21c73844-3235-4a12-9f77-901ba8614e11-kube-api-access-nfjpm\") pod \"placement-55d7557768-wvvpt\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.473468 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-99mjt\" (UniqueName: \"kubernetes.io/projected/f2f9acba-3f54-43b6-9461-31cba0cc954b-kube-api-access-99mjt\") pod \"keystone-78c4d587b5-cqhdl\" (UID: \"f2f9acba-3f54-43b6-9461-31cba0cc954b\") " pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.486238 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.542777 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bc958ddf6-kh2rq"] Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.544127 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.550378 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.568022 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc958ddf6-kh2rq"] Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.712395 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-combined-ca-bundle\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.712820 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-logs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.712868 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-internal-tls-certs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.712914 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-scripts\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.712952 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-config-data\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.713217 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-public-tls-certs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.713345 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75n2\" (UniqueName: \"kubernetes.io/projected/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-kube-api-access-q75n2\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.815313 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-public-tls-certs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.815414 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q75n2\" (UniqueName: \"kubernetes.io/projected/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-kube-api-access-q75n2\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.815457 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-combined-ca-bundle\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.815588 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-logs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.815624 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-internal-tls-certs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.815654 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-scripts\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.815687 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-config-data\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.827103 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-logs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.827346 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-config-data\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.827449 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-internal-tls-certs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.839480 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-scripts\") pod \"placement-bc958ddf6-kh2rq\" (UID: 
\"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.849727 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-combined-ca-bundle\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.856543 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-public-tls-certs\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.873438 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q75n2\" (UniqueName: \"kubernetes.io/projected/42c3b1e3-728a-4bd8-9669-bfe1656b6de2-kube-api-access-q75n2\") pod \"placement-bc958ddf6-kh2rq\" (UID: \"42c3b1e3-728a-4bd8-9669-bfe1656b6de2\") " pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:30 crc kubenswrapper[4836]: I0217 14:28:30.993129 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.115748 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fc032cb-3063-4e39-a91f-ccc89defe9c4","Type":"ContainerStarted","Data":"2d37a99072f4fb6a9bc38dee8c6986d96ef5977cd1d2c3dca6d3d95cb5f3bcee"} Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.326900 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6567fb9c77-xcq7p"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.341190 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.356618 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.356995 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.366042 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fkh7w" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.386354 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6567fb9c77-xcq7p"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.431860 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-68fd77ffbb-m5r5c"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.434035 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.444972 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.446570 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-config-data\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.446617 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-config-data-custom\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.446648 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-logs\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.446710 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-combined-ca-bundle\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.446782 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44t8g\" (UniqueName: \"kubernetes.io/projected/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-kube-api-access-44t8g\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.460578 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-68fd77ffbb-m5r5c"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.548822 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jhnl\" (UniqueName: \"kubernetes.io/projected/f79d706e-2d22-49c6-acb5-dc3f130ab102-kube-api-access-4jhnl\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.548879 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-config-data-custom\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.548917 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44t8g\" (UniqueName: \"kubernetes.io/projected/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-kube-api-access-44t8g\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.548940 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-config-data\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.549019 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79d706e-2d22-49c6-acb5-dc3f130ab102-logs\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.549060 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-config-data\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.549082 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-config-data-custom\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.549112 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-logs\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.549181 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-combined-ca-bundle\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.549209 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-combined-ca-bundle\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.551269 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-logs\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.561503 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-config-data\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.570251 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-m9ds8"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.573417 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.583502 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-combined-ca-bundle\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.588100 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-config-data-custom\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.622448 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44t8g\" (UniqueName: \"kubernetes.io/projected/bf33e52a-365f-4ccc-8352-f4c7f8e2aebd-kube-api-access-44t8g\") pod \"barbican-worker-6567fb9c77-xcq7p\" (UID: \"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd\") " pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.630144 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-m9ds8"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.651680 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-svc\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.651776 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f79d706e-2d22-49c6-acb5-dc3f130ab102-logs\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.652947 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653019 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653060 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spw5p\" (UniqueName: \"kubernetes.io/projected/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-kube-api-access-spw5p\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653138 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653474 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-combined-ca-bundle\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653542 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jhnl\" (UniqueName: \"kubernetes.io/projected/f79d706e-2d22-49c6-acb5-dc3f130ab102-kube-api-access-4jhnl\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653574 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-config-data-custom\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653603 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-config\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.653636 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-config-data\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 
14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.668447 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79d706e-2d22-49c6-acb5-dc3f130ab102-logs\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.677686 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-combined-ca-bundle\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.678685 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-config-data\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.694861 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jhnl\" (UniqueName: \"kubernetes.io/projected/f79d706e-2d22-49c6-acb5-dc3f130ab102-kube-api-access-4jhnl\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.704393 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f79d706e-2d22-49c6-acb5-dc3f130ab102-config-data-custom\") pod \"barbican-keystone-listener-68fd77ffbb-m5r5c\" (UID: \"f79d706e-2d22-49c6-acb5-dc3f130ab102\") " 
pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.706458 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6567fb9c77-xcq7p" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.755387 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-svc\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.756471 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-svc\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.757212 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.757260 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.757349 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.759212 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-649d9995c8-rcxvp"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.761420 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.763526 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spw5p\" (UniqueName: \"kubernetes.io/projected/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-kube-api-access-spw5p\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.765735 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.766051 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.766367 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.766880 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.773778 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-config\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.775931 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-config\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.800856 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.819174 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spw5p\" (UniqueName: \"kubernetes.io/projected/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-kube-api-access-spw5p\") pod \"dnsmasq-dns-85ff748b95-m9ds8\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.833596 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-649d9995c8-rcxvp"] Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.886549 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsjs\" (UniqueName: \"kubernetes.io/projected/ed247b9d-af54-401e-80a3-82d18772f29d-kube-api-access-slsjs\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.886640 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-combined-ca-bundle\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.886825 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.886987 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data-custom\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.887189 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed247b9d-af54-401e-80a3-82d18772f29d-logs\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.946849 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.991196 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slsjs\" (UniqueName: \"kubernetes.io/projected/ed247b9d-af54-401e-80a3-82d18772f29d-kube-api-access-slsjs\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.991272 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-combined-ca-bundle\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.991337 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: 
\"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.991389 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data-custom\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.991464 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed247b9d-af54-401e-80a3-82d18772f29d-logs\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.991958 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed247b9d-af54-401e-80a3-82d18772f29d-logs\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.997810 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data-custom\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:31 crc kubenswrapper[4836]: I0217 14:28:31.998107 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-combined-ca-bundle\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 
17 14:28:32 crc kubenswrapper[4836]: I0217 14:28:32.001280 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:32 crc kubenswrapper[4836]: I0217 14:28:32.019058 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsjs\" (UniqueName: \"kubernetes.io/projected/ed247b9d-af54-401e-80a3-82d18772f29d-kube-api-access-slsjs\") pod \"barbican-api-649d9995c8-rcxvp\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:32 crc kubenswrapper[4836]: I0217 14:28:32.254024 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:33 crc kubenswrapper[4836]: I0217 14:28:33.165602 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c29f84b9-3879-4fc6-b2aa-e334bd08f24e","Type":"ContainerStarted","Data":"4231e0f0134e5c8db2d1379ad611e9d1ddd911c706b7c534c46f5a480fa7035b"} Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.031742 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.047544 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.190814 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.691012 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dc9c9fdbb-zxjj6"] Feb 17 14:28:34 crc 
kubenswrapper[4836]: I0217 14:28:34.693048 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.695623 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.696257 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.709314 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dc9c9fdbb-zxjj6"] Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.873883 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-logs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.873960 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-config-data-custom\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.873983 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-internal-tls-certs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.874233 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-combined-ca-bundle\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.874342 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-config-data\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.874743 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-public-tls-certs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.874788 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bpt\" (UniqueName: \"kubernetes.io/projected/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-kube-api-access-v4bpt\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977125 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-logs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977267 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-config-data-custom\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977320 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-internal-tls-certs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977376 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-combined-ca-bundle\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977403 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-config-data\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977628 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-public-tls-certs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977675 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v4bpt\" (UniqueName: \"kubernetes.io/projected/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-kube-api-access-v4bpt\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.977725 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-logs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.984283 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-combined-ca-bundle\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.985030 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-config-data\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.986077 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-public-tls-certs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.986717 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-config-data-custom\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.988941 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-internal-tls-certs\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:34 crc kubenswrapper[4836]: I0217 14:28:34.999073 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bpt\" (UniqueName: \"kubernetes.io/projected/62b902ba-6ba2-48f3-a6dc-652fd1d6d58c-kube-api-access-v4bpt\") pod \"barbican-api-7dc9c9fdbb-zxjj6\" (UID: \"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c\") " pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:35 crc kubenswrapper[4836]: I0217 14:28:35.023323 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:35 crc kubenswrapper[4836]: I0217 14:28:35.997543 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.117400 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-sb\") pod \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.117520 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-config\") pod \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.119755 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-swift-storage-0\") pod \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.119917 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-nb\") pod \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.120048 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n78t2\" (UniqueName: \"kubernetes.io/projected/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-kube-api-access-n78t2\") pod \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.120097 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-svc\") pod \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\" (UID: \"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9\") " Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.127147 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-kube-api-access-n78t2" (OuterVolumeSpecName: "kube-api-access-n78t2") pod "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" (UID: "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9"). InnerVolumeSpecName "kube-api-access-n78t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.183596 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" (UID: "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.188476 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-config" (OuterVolumeSpecName: "config") pod "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" (UID: "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.193541 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" (UID: "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.220933 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" event={"ID":"ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9","Type":"ContainerDied","Data":"e9f16c54dee6fca57cba69a1f24712669edc03c2f1b74e5ff682993352dbd1af"} Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.221018 4836 scope.go:117] "RemoveContainer" containerID="2fca778edd45bdfb866af7aaa0fc6f307d910a96cf1cd5ecfab2d14db35f72e8" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.221393 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.225065 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n78t2\" (UniqueName: \"kubernetes.io/projected/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-kube-api-access-n78t2\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.225105 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.225116 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.225125 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.252009 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" (UID: "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.287943 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" (UID: "ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.327494 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.327527 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.591653 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"] Feb 17 14:28:36 crc kubenswrapper[4836]: I0217 14:28:36.601664 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9hcq9"] Feb 17 14:28:37 crc kubenswrapper[4836]: I0217 14:28:37.644699 4836 scope.go:117] "RemoveContainer" containerID="99b7d1e9f2cb717570cc4209028495f2ccc23c4beb025f8110935cc03d58feb9" Feb 17 14:28:38 crc kubenswrapper[4836]: I0217 14:28:38.249461 4836 generic.go:334] "Generic (PLEG): container finished" podID="8185c649-f1ad-4230-830d-07d002e5b358" 
containerID="ff24c89536ae06cf6a0fbffcb68050de3e8ed22356c912b4e7e87afbef99480d" exitCode=0 Feb 17 14:28:38 crc kubenswrapper[4836]: I0217 14:28:38.249570 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqwhc" event={"ID":"8185c649-f1ad-4230-830d-07d002e5b358","Type":"ContainerDied","Data":"ff24c89536ae06cf6a0fbffcb68050de3e8ed22356c912b4e7e87afbef99480d"} Feb 17 14:28:38 crc kubenswrapper[4836]: I0217 14:28:38.581377 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" path="/var/lib/kubelet/pods/ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9/volumes" Feb 17 14:28:38 crc kubenswrapper[4836]: I0217 14:28:38.943423 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-9hcq9" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.121379 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dc9c9fdbb-zxjj6"] Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.131393 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-649d9995c8-rcxvp"] Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.141555 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-78c4d587b5-cqhdl"] Feb 17 14:28:39 crc kubenswrapper[4836]: W0217 14:28:39.158755 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c73844_3235_4a12_9f77_901ba8614e11.slice/crio-bbe789caf6ed33cc607fbf4e010b5eb03468b6cfaedd4af371c447ef9c0fa67b WatchSource:0}: Error finding container bbe789caf6ed33cc607fbf4e010b5eb03468b6cfaedd4af371c447ef9c0fa67b: Status 404 returned error can't find the container with id bbe789caf6ed33cc607fbf4e010b5eb03468b6cfaedd4af371c447ef9c0fa67b Feb 17 
14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.175532 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55d7557768-wvvpt"] Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.187625 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc958ddf6-kh2rq"] Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.224544 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-68fd77ffbb-m5r5c"] Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.224839 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6567fb9c77-xcq7p"] Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.260064 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-m9ds8"] Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.299727 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d7557768-wvvpt" event={"ID":"21c73844-3235-4a12-9f77-901ba8614e11","Type":"ContainerStarted","Data":"bbe789caf6ed33cc607fbf4e010b5eb03468b6cfaedd4af371c447ef9c0fa67b"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.310938 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc958ddf6-kh2rq" event={"ID":"42c3b1e3-728a-4bd8-9669-bfe1656b6de2","Type":"ContainerStarted","Data":"b84d4e5903eaf939e6e9df46d9fb1bb6356ca308386cbbf38e24f231a92fd785"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.314571 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c29f84b9-3879-4fc6-b2aa-e334bd08f24e","Type":"ContainerStarted","Data":"ffac93583d3a46218a79cd0eec11b0e9213bdce6e0622ee8ec1b1030a56cebbf"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.320079 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"9fc032cb-3063-4e39-a91f-ccc89defe9c4","Type":"ContainerStarted","Data":"253884c8bfee6f38dc03fef1da6c5e47b92d31a3b1592567360ef3f04d7144a9"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.322001 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" event={"ID":"f79d706e-2d22-49c6-acb5-dc3f130ab102","Type":"ContainerStarted","Data":"01bbc5f7898f61cdcdfc6bcc497fb9d3899fe8249745e1a2740b884ba8f14e3e"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.327410 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" event={"ID":"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c","Type":"ContainerStarted","Data":"edc3f4ab33e4ac096c5f5d2d06d6b958f1361e9414fe6db97b0070eb37f81b3b"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.347439 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6567fb9c77-xcq7p" event={"ID":"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd","Type":"ContainerStarted","Data":"a18fab57f2f011fbaa8104cb37092948da9f782eda5389fddb0e9e15d016b797"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.349431 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.349354837 podStartE2EDuration="12.349354837s" podCreationTimestamp="2026-02-17 14:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:39.337651888 +0000 UTC m=+1345.680580167" watchObservedRunningTime="2026-02-17 14:28:39.349354837 +0000 UTC m=+1345.692283126" Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.350545 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" 
event={"ID":"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0","Type":"ContainerStarted","Data":"769356b8227ec8df8f5b18dca0f8472d3df22108be6b842885971987f0b77c6e"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.367633 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerStarted","Data":"b11cf843196ed96ab329470f8fb90c845e937e84667798d3853568520da77e41"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.385736 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-pvljf" event={"ID":"4e016162-2025-44ad-989d-ce71d9f8f9bf","Type":"ContainerStarted","Data":"fc7f81c47e20cce7a74c227545b963bd61d6dadbccf7dacfaa97a9b912354775"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.400468 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-649d9995c8-rcxvp" event={"ID":"ed247b9d-af54-401e-80a3-82d18772f29d","Type":"ContainerStarted","Data":"569d3d6ec6caf0a5240f5c4b4d890d8ae7a0c22c3878dcb1e8c71559ae9f5a26"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.402480 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-78c4d587b5-cqhdl" event={"ID":"f2f9acba-3f54-43b6-9461-31cba0cc954b","Type":"ContainerStarted","Data":"3ae7c7cdf4ba5c76850b19dacd65afe9e21b18d8d278aea8459f693c9b38a7d0"} Feb 17 14:28:39 crc kubenswrapper[4836]: I0217 14:28:39.423857 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.423829491 podStartE2EDuration="12.423829491s" podCreationTimestamp="2026-02-17 14:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:39.3778131 +0000 UTC m=+1345.720741379" watchObservedRunningTime="2026-02-17 14:28:39.423829491 +0000 UTC m=+1345.766757760" Feb 17 14:28:39 crc 
kubenswrapper[4836]: I0217 14:28:39.429971 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-pvljf" podStartSLOduration=5.486771198 podStartE2EDuration="1m3.429945457s" podCreationTimestamp="2026-02-17 14:27:36 +0000 UTC" firstStartedPulling="2026-02-17 14:27:40.031338307 +0000 UTC m=+1286.374266576" lastFinishedPulling="2026-02-17 14:28:37.974512566 +0000 UTC m=+1344.317440835" observedRunningTime="2026-02-17 14:28:39.416662447 +0000 UTC m=+1345.759590716" watchObservedRunningTime="2026-02-17 14:28:39.429945457 +0000 UTC m=+1345.772873726" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.147527 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.288858 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-scripts\") pod \"8185c649-f1ad-4230-830d-07d002e5b358\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.288951 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-db-sync-config-data\") pod \"8185c649-f1ad-4230-830d-07d002e5b358\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.289063 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8185c649-f1ad-4230-830d-07d002e5b358-etc-machine-id\") pod \"8185c649-f1ad-4230-830d-07d002e5b358\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.289142 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-combined-ca-bundle\") pod \"8185c649-f1ad-4230-830d-07d002e5b358\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.289218 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftmbq\" (UniqueName: \"kubernetes.io/projected/8185c649-f1ad-4230-830d-07d002e5b358-kube-api-access-ftmbq\") pod \"8185c649-f1ad-4230-830d-07d002e5b358\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.289441 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-config-data\") pod \"8185c649-f1ad-4230-830d-07d002e5b358\" (UID: \"8185c649-f1ad-4230-830d-07d002e5b358\") " Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.289519 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8185c649-f1ad-4230-830d-07d002e5b358-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8185c649-f1ad-4230-830d-07d002e5b358" (UID: "8185c649-f1ad-4230-830d-07d002e5b358"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.291774 4836 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8185c649-f1ad-4230-830d-07d002e5b358-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.312705 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-scripts" (OuterVolumeSpecName: "scripts") pod "8185c649-f1ad-4230-830d-07d002e5b358" (UID: "8185c649-f1ad-4230-830d-07d002e5b358"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.325571 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8185c649-f1ad-4230-830d-07d002e5b358-kube-api-access-ftmbq" (OuterVolumeSpecName: "kube-api-access-ftmbq") pod "8185c649-f1ad-4230-830d-07d002e5b358" (UID: "8185c649-f1ad-4230-830d-07d002e5b358"). InnerVolumeSpecName "kube-api-access-ftmbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.325697 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8185c649-f1ad-4230-830d-07d002e5b358" (UID: "8185c649-f1ad-4230-830d-07d002e5b358"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.374522 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8185c649-f1ad-4230-830d-07d002e5b358" (UID: "8185c649-f1ad-4230-830d-07d002e5b358"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.396587 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.396650 4836 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.396666 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.396678 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftmbq\" (UniqueName: \"kubernetes.io/projected/8185c649-f1ad-4230-830d-07d002e5b358-kube-api-access-ftmbq\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.424514 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-config-data" (OuterVolumeSpecName: "config-data") pod "8185c649-f1ad-4230-830d-07d002e5b358" (UID: "8185c649-f1ad-4230-830d-07d002e5b358"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.442413 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqwhc" event={"ID":"8185c649-f1ad-4230-830d-07d002e5b358","Type":"ContainerDied","Data":"b3482ed7c18ae58a71068d39ec0f731b2f5c23d1bee2fd95e9d280383de59ee3"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.442464 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3482ed7c18ae58a71068d39ec0f731b2f5c23d1bee2fd95e9d280383de59ee3" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.442490 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqwhc" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.455198 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-649d9995c8-rcxvp" event={"ID":"ed247b9d-af54-401e-80a3-82d18772f29d","Type":"ContainerStarted","Data":"0f637c77116f7f89955d3abdef322406d505cefd736b74cd4ec4ac6a045c16f5"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.455256 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-649d9995c8-rcxvp" event={"ID":"ed247b9d-af54-401e-80a3-82d18772f29d","Type":"ContainerStarted","Data":"1cbfdb3ab6153a9ef06f48a83598052cef7a8d6eccc057f49dbfbf1a10abee8c"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.455364 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.472178 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-78c4d587b5-cqhdl" event={"ID":"f2f9acba-3f54-43b6-9461-31cba0cc954b","Type":"ContainerStarted","Data":"257eab0890393e6caf8d16464988774c8d840eb8ba2300d1cba4536e48d1671d"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.479500 4836 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.529476 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8185c649-f1ad-4230-830d-07d002e5b358-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.547528 4836 generic.go:334] "Generic (PLEG): container finished" podID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerID="42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4" exitCode=0 Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.555692 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" event={"ID":"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c","Type":"ContainerStarted","Data":"c922d67f76cc408680821ed49972a5114ab647a61b7cd84843e32accf4f0fc8b"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.555803 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.555822 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" event={"ID":"62b902ba-6ba2-48f3-a6dc-652fd1d6d58c","Type":"ContainerStarted","Data":"5ba46c4e4411bd77b1697c5688742a0dcb01cc6b5035e314a1b65cd71fd750dd"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.555834 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.555856 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" event={"ID":"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0","Type":"ContainerDied","Data":"42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.560461 4836 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/barbican-api-649d9995c8-rcxvp" podStartSLOduration=9.560437445 podStartE2EDuration="9.560437445s" podCreationTimestamp="2026-02-17 14:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:40.526480722 +0000 UTC m=+1346.869409011" watchObservedRunningTime="2026-02-17 14:28:40.560437445 +0000 UTC m=+1346.903365714" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.578870 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-78c4d587b5-cqhdl" podStartSLOduration=10.578846645 podStartE2EDuration="10.578846645s" podCreationTimestamp="2026-02-17 14:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:40.557029462 +0000 UTC m=+1346.899957741" watchObservedRunningTime="2026-02-17 14:28:40.578846645 +0000 UTC m=+1346.921774904" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.613953 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc958ddf6-kh2rq" event={"ID":"42c3b1e3-728a-4bd8-9669-bfe1656b6de2","Type":"ContainerStarted","Data":"74874321dc0f21c297655ffb8e49b9e5f17f6048c138f35b62d786cdb8a831aa"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.614045 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.614065 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.636814 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" podStartSLOduration=6.636748799 podStartE2EDuration="6.636748799s" podCreationTimestamp="2026-02-17 14:28:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:40.61101809 +0000 UTC m=+1346.953946369" watchObservedRunningTime="2026-02-17 14:28:40.636748799 +0000 UTC m=+1346.979677088" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.648633 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d7557768-wvvpt" event={"ID":"21c73844-3235-4a12-9f77-901ba8614e11","Type":"ContainerStarted","Data":"f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.648705 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d7557768-wvvpt" event={"ID":"21c73844-3235-4a12-9f77-901ba8614e11","Type":"ContainerStarted","Data":"d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9"} Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.649222 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.649246 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.692489 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:28:40 crc kubenswrapper[4836]: E0217 14:28:40.693068 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8185c649-f1ad-4230-830d-07d002e5b358" containerName="cinder-db-sync" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.693084 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="8185c649-f1ad-4230-830d-07d002e5b358" containerName="cinder-db-sync" Feb 17 14:28:40 crc kubenswrapper[4836]: E0217 14:28:40.693096 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerName="init" Feb 17 14:28:40 
crc kubenswrapper[4836]: I0217 14:28:40.693102 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerName="init" Feb 17 14:28:40 crc kubenswrapper[4836]: E0217 14:28:40.693108 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerName="dnsmasq-dns" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.693114 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerName="dnsmasq-dns" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.693387 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="8185c649-f1ad-4230-830d-07d002e5b358" containerName="cinder-db-sync" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.693398 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccce7d80-ec87-4fb2-a75f-1b5ddc2f4be9" containerName="dnsmasq-dns" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.694668 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.703957 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.705801 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cg95t" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.705867 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.705891 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.783452 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.832057 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55d7557768-wvvpt" podStartSLOduration=10.832026249 podStartE2EDuration="10.832026249s" podCreationTimestamp="2026-02-17 14:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:40.729431899 +0000 UTC m=+1347.072360188" watchObservedRunningTime="2026-02-17 14:28:40.832026249 +0000 UTC m=+1347.174954518" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.838898 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-scripts\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.838993 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.839087 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.839199 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d3a41-bec9-4783-a2c6-2e6627156cce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.839493 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.839647 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwnw\" (UniqueName: \"kubernetes.io/projected/f85d3a41-bec9-4783-a2c6-2e6627156cce-kube-api-access-mcwnw\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.915533 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bc958ddf6-kh2rq" 
podStartSLOduration=10.915499708 podStartE2EDuration="10.915499708s" podCreationTimestamp="2026-02-17 14:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:40.782925574 +0000 UTC m=+1347.125853843" watchObservedRunningTime="2026-02-17 14:28:40.915499708 +0000 UTC m=+1347.258427967" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.944350 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.944431 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcwnw\" (UniqueName: \"kubernetes.io/projected/f85d3a41-bec9-4783-a2c6-2e6627156cce-kube-api-access-mcwnw\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.944484 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-scripts\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.944507 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.944551 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.944597 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d3a41-bec9-4783-a2c6-2e6627156cce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:40 crc kubenswrapper[4836]: I0217 14:28:40.944721 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d3a41-bec9-4783-a2c6-2e6627156cce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.063112 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-m9ds8"] Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.086433 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nvkvs"] Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.088982 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.111448 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-scripts\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.111934 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.116331 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nvkvs"] Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.138163 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcwnw\" (UniqueName: \"kubernetes.io/projected/f85d3a41-bec9-4783-a2c6-2e6627156cce-kube-api-access-mcwnw\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.138616 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " pod="openstack/cinder-scheduler-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.142681 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data\") pod \"cinder-scheduler-0\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") 
" pod="openstack/cinder-scheduler-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.149507 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-config\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.154433 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvggg\" (UniqueName: \"kubernetes.io/projected/79b71acb-6b55-4f99-8b13-0c5aea065cbb-kube-api-access-zvggg\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.154600 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.154938 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.155164 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: 
\"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.155250 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.164504 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.167042 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.186038 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.186786 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258533 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258606 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258659 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fadcf3-845e-4605-add5-6b5b6092e443-logs\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258692 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data-custom\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258780 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtlp\" (UniqueName: \"kubernetes.io/projected/68fadcf3-845e-4605-add5-6b5b6092e443-kube-api-access-wrtlp\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258812 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-config\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258860 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvggg\" (UniqueName: \"kubernetes.io/projected/79b71acb-6b55-4f99-8b13-0c5aea065cbb-kube-api-access-zvggg\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258877 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-scripts\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258920 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.258951 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.259994 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.260056 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-config\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.261039 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: 
\"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.261128 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.261251 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.261323 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68fadcf3-845e-4605-add5-6b5b6092e443-etc-machine-id\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.261362 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.262036 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.305373 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvggg\" (UniqueName: \"kubernetes.io/projected/79b71acb-6b55-4f99-8b13-0c5aea065cbb-kube-api-access-zvggg\") pod \"dnsmasq-dns-5c9776ccc5-nvkvs\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.345629 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.363394 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrtlp\" (UniqueName: \"kubernetes.io/projected/68fadcf3-845e-4605-add5-6b5b6092e443-kube-api-access-wrtlp\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.363519 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-scripts\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.363573 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.363631 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68fadcf3-845e-4605-add5-6b5b6092e443-etc-machine-id\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") 
" pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.363658 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.363737 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fadcf3-845e-4605-add5-6b5b6092e443-logs\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.363771 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data-custom\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.366268 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68fadcf3-845e-4605-add5-6b5b6092e443-etc-machine-id\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.371927 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fadcf3-845e-4605-add5-6b5b6092e443-logs\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.375965 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data-custom\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.382334 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.383121 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.392010 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-scripts\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.402535 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrtlp\" (UniqueName: \"kubernetes.io/projected/68fadcf3-845e-4605-add5-6b5b6092e443-kube-api-access-wrtlp\") pod \"cinder-api-0\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.559405 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.601779 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.673422 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc958ddf6-kh2rq" event={"ID":"42c3b1e3-728a-4bd8-9669-bfe1656b6de2","Type":"ContainerStarted","Data":"cd82728e9e642b4fdcdb3da0600fffa9b73fbd7c12deb985a9bb63d9550eb3b7"} Feb 17 14:28:41 crc kubenswrapper[4836]: I0217 14:28:41.675017 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.418685 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.680406 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.751282 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" event={"ID":"f79d706e-2d22-49c6-acb5-dc3f130ab102","Type":"ContainerStarted","Data":"5d611f254cd8b26e577a0770e4344757470d1efb9c2b80eb7d014f808d33145d"} Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.761772 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6567fb9c77-xcq7p" event={"ID":"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd","Type":"ContainerStarted","Data":"cada7ae52988ba210d8ab4a87f2fde9c64110427784b4893df0918b6e1e9743f"} Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.768254 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" event={"ID":"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0","Type":"ContainerStarted","Data":"285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8"} Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.768529 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" 
podUID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerName="dnsmasq-dns" containerID="cri-o://285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8" gracePeriod=10 Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.768659 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.806262 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" podStartSLOduration=12.806229064 podStartE2EDuration="12.806229064s" podCreationTimestamp="2026-02-17 14:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:43.792873601 +0000 UTC m=+1350.135801880" watchObservedRunningTime="2026-02-17 14:28:43.806229064 +0000 UTC m=+1350.149157333" Feb 17 14:28:43 crc kubenswrapper[4836]: I0217 14:28:43.844019 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:28:43 crc kubenswrapper[4836]: W0217 14:28:43.864530 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d3a41_bec9_4783_a2c6_2e6627156cce.slice/crio-d7078c4a99b5b82f758808cb22901fe00a2b1684715c3195cb32e503ee688469 WatchSource:0}: Error finding container d7078c4a99b5b82f758808cb22901fe00a2b1684715c3195cb32e503ee688469: Status 404 returned error can't find the container with id d7078c4a99b5b82f758808cb22901fe00a2b1684715c3195cb32e503ee688469 Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.039586 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nvkvs"] Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.454558 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:28:44 crc 
kubenswrapper[4836]: I0217 14:28:44.563574 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.608743 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spw5p\" (UniqueName: \"kubernetes.io/projected/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-kube-api-access-spw5p\") pod \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.608859 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-nb\") pod \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.608905 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-sb\") pod \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.608935 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-swift-storage-0\") pod \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.609110 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-config\") pod \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 
14:28:44.609209 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-svc\") pod \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\" (UID: \"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0\") " Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.646431 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-kube-api-access-spw5p" (OuterVolumeSpecName: "kube-api-access-spw5p") pod "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" (UID: "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0"). InnerVolumeSpecName "kube-api-access-spw5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.713354 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spw5p\" (UniqueName: \"kubernetes.io/projected/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-kube-api-access-spw5p\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.795702 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" (UID: "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.824480 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" (UID: "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.833004 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.833120 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.892712 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" (UID: "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.899691 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" event={"ID":"f79d706e-2d22-49c6-acb5-dc3f130ab102","Type":"ContainerStarted","Data":"c5627163cba14a6237b268dc4e67eff4bc9b5ac5e211c616969c2bb5f1c1dfe4"} Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.945389 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 14:28:44.951878 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6567fb9c77-xcq7p" event={"ID":"bf33e52a-365f-4ccc-8352-f4c7f8e2aebd","Type":"ContainerStarted","Data":"cc1332951352adff0e97b1b3dcac0fe2080c98fdd0a7ef00edae5a858db9e20d"} Feb 17 14:28:44 crc kubenswrapper[4836]: I0217 
14:28:44.961404 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-config" (OuterVolumeSpecName: "config") pod "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" (UID: "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:44.999570 4836 generic.go:334] "Generic (PLEG): container finished" podID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerID="285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8" exitCode=0 Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:44.999690 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" event={"ID":"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0","Type":"ContainerDied","Data":"285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8"} Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:44.999728 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" event={"ID":"d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0","Type":"ContainerDied","Data":"769356b8227ec8df8f5b18dca0f8472d3df22108be6b842885971987f0b77c6e"} Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:44.999751 4836 scope.go:117] "RemoveContainer" containerID="285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:44.999949 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-m9ds8" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.019215 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" event={"ID":"79b71acb-6b55-4f99-8b13-0c5aea065cbb","Type":"ContainerStarted","Data":"1fe3b4e682953cc1e2a6a78ba19a4ca238a5effc3b1823c6d9c0ce3876e226a4"} Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.030353 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f85d3a41-bec9-4783-a2c6-2e6627156cce","Type":"ContainerStarted","Data":"d7078c4a99b5b82f758808cb22901fe00a2b1684715c3195cb32e503ee688469"} Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.042216 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"68fadcf3-845e-4605-add5-6b5b6092e443","Type":"ContainerStarted","Data":"f48faf4eb5b80c2380cc0bd40d17893777d9a019452082ae03e6f15d493c9099"} Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.051832 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.064740 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc789578f-mcrrx"] Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.065095 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bc789578f-mcrrx" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-api" containerID="cri-o://ef329f1c472e28115c477d0f824ce0452609f500341f5fe161170bb1b7dd1f36" gracePeriod=30 Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.066081 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bc789578f-mcrrx" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" 
containerName="neutron-httpd" containerID="cri-o://29474b05f933bb7261368e691fe6f6124baae6cdcaac7f0997ad485f3fcff20d" gracePeriod=30 Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.096341 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fc4994bf7-cqhhj"] Feb 17 14:28:45 crc kubenswrapper[4836]: E0217 14:28:45.096841 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerName="dnsmasq-dns" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.096857 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerName="dnsmasq-dns" Feb 17 14:28:45 crc kubenswrapper[4836]: E0217 14:28:45.096902 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerName="init" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.096910 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerName="init" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.097134 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" containerName="dnsmasq-dns" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.100451 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.111574 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" (UID: "d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.125461 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.155261 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.159723 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fc4994bf7-cqhhj"] Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.226109 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-68fd77ffbb-m5r5c" podStartSLOduration=10.421944468 podStartE2EDuration="14.226078448s" podCreationTimestamp="2026-02-17 14:28:31 +0000 UTC" firstStartedPulling="2026-02-17 14:28:39.24281577 +0000 UTC m=+1345.585744039" lastFinishedPulling="2026-02-17 14:28:43.04694975 +0000 UTC m=+1349.389878019" observedRunningTime="2026-02-17 14:28:44.946214659 +0000 UTC m=+1351.289142958" watchObservedRunningTime="2026-02-17 14:28:45.226078448 +0000 UTC m=+1351.569006727" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.250822 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6567fb9c77-xcq7p" podStartSLOduration=10.405541462 podStartE2EDuration="14.25078654s" podCreationTimestamp="2026-02-17 14:28:31 +0000 UTC" firstStartedPulling="2026-02-17 14:28:39.24245248 +0000 UTC m=+1345.585380749" lastFinishedPulling="2026-02-17 14:28:43.087697558 +0000 UTC m=+1349.430625827" observedRunningTime="2026-02-17 14:28:45.017015684 +0000 UTC m=+1351.359943963" watchObservedRunningTime="2026-02-17 14:28:45.25078654 +0000 UTC m=+1351.593714819" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.257214 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-config\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.258875 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-ovndb-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.258984 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8597g\" (UniqueName: \"kubernetes.io/projected/88848d0f-5d90-4ca0-9a78-d08e73159601-kube-api-access-8597g\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.259080 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-combined-ca-bundle\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.259196 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-httpd-config\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.259347 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-public-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.259519 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-internal-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.283969 4836 scope.go:117] "RemoveContainer" containerID="42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.362137 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8597g\" (UniqueName: \"kubernetes.io/projected/88848d0f-5d90-4ca0-9a78-d08e73159601-kube-api-access-8597g\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.362966 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-combined-ca-bundle\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.364029 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-httpd-config\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: 
\"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.364394 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-public-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.364676 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-internal-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.364865 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-config\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.365005 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-ovndb-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.369684 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-internal-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc 
kubenswrapper[4836]: I0217 14:28:45.371124 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-combined-ca-bundle\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.371987 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-ovndb-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.377549 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-public-tls-certs\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.378810 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-httpd-config\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.382895 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/88848d0f-5d90-4ca0-9a78-d08e73159601-config\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.383669 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8597g\" (UniqueName: 
\"kubernetes.io/projected/88848d0f-5d90-4ca0-9a78-d08e73159601-kube-api-access-8597g\") pod \"neutron-6fc4994bf7-cqhhj\" (UID: \"88848d0f-5d90-4ca0-9a78-d08e73159601\") " pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.546677 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-m9ds8"] Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.564694 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.588083 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-m9ds8"] Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.671858 4836 scope.go:117] "RemoveContainer" containerID="285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8" Feb 17 14:28:45 crc kubenswrapper[4836]: E0217 14:28:45.674270 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8\": container with ID starting with 285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8 not found: ID does not exist" containerID="285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.674356 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8"} err="failed to get container status \"285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8\": rpc error: code = NotFound desc = could not find container \"285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8\": container with ID starting with 285379d1eba3979dcb5d94df1c00afbe970d0914d5d9084d64761ce8e89152e8 not found: ID does not exist" Feb 17 14:28:45 crc kubenswrapper[4836]: 
I0217 14:28:45.674414 4836 scope.go:117] "RemoveContainer" containerID="42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4" Feb 17 14:28:45 crc kubenswrapper[4836]: E0217 14:28:45.675128 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4\": container with ID starting with 42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4 not found: ID does not exist" containerID="42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4" Feb 17 14:28:45 crc kubenswrapper[4836]: I0217 14:28:45.675169 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4"} err="failed to get container status \"42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4\": rpc error: code = NotFound desc = could not find container \"42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4\": container with ID starting with 42d2d98237842a647a52d62a45a7a321647f264ef6fe4dba4d7263c62fd90bc4 not found: ID does not exist" Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.103992 4836 generic.go:334] "Generic (PLEG): container finished" podID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerID="29474b05f933bb7261368e691fe6f6124baae6cdcaac7f0997ad485f3fcff20d" exitCode=0 Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.104460 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc789578f-mcrrx" event={"ID":"a7dc98d2-302d-4633-8123-fe76bb7dbd40","Type":"ContainerDied","Data":"29474b05f933bb7261368e691fe6f6124baae6cdcaac7f0997ad485f3fcff20d"} Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.108130 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"68fadcf3-845e-4605-add5-6b5b6092e443","Type":"ContainerStarted","Data":"1faaca327f5f255f0110639e69475e673f9cf85d2a5f0e368b4e02002323c46e"} Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.165725 4836 generic.go:334] "Generic (PLEG): container finished" podID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerID="277bd33eae834b988e7c295c653ee707631d0efdc5453cfacb6a97be01ceb016" exitCode=0 Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.166145 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" event={"ID":"79b71acb-6b55-4f99-8b13-0c5aea065cbb","Type":"ContainerDied","Data":"277bd33eae834b988e7c295c653ee707631d0efdc5453cfacb6a97be01ceb016"} Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.413217 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fc4994bf7-cqhhj"] Feb 17 14:28:46 crc kubenswrapper[4836]: W0217 14:28:46.497228 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88848d0f_5d90_4ca0_9a78_d08e73159601.slice/crio-05963d65db74fb793e8950376bfdf51fdcf588dc4e9e04371a2fbbc4a0e16f14 WatchSource:0}: Error finding container 05963d65db74fb793e8950376bfdf51fdcf588dc4e9e04371a2fbbc4a0e16f14: Status 404 returned error can't find the container with id 05963d65db74fb793e8950376bfdf51fdcf588dc4e9e04371a2fbbc4a0e16f14 Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.591412 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0" path="/var/lib/kubelet/pods/d2472d16-7ff3-4f1c-a9b8-045dd8ffa6d0/volumes" Feb 17 14:28:46 crc kubenswrapper[4836]: I0217 14:28:46.812905 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-bc789578f-mcrrx" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9696/\": dial tcp 
10.217.0.170:9696: connect: connection refused" Feb 17 14:28:47 crc kubenswrapper[4836]: I0217 14:28:47.263451 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc4994bf7-cqhhj" event={"ID":"88848d0f-5d90-4ca0-9a78-d08e73159601","Type":"ContainerStarted","Data":"05963d65db74fb793e8950376bfdf51fdcf588dc4e9e04371a2fbbc4a0e16f14"} Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.319589 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" event={"ID":"79b71acb-6b55-4f99-8b13-0c5aea065cbb","Type":"ContainerStarted","Data":"116ce92f31628ecf8d5384bc487f6288540b5a8b08da5572838c3c49083bb344"} Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.321681 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.342531 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f85d3a41-bec9-4783-a2c6-2e6627156cce","Type":"ContainerStarted","Data":"713b2d67a8bcdbac716e75c48ae87c2e03fb9564a7bca1ee0f598db4994298c7"} Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.345048 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"68fadcf3-845e-4605-add5-6b5b6092e443","Type":"ContainerStarted","Data":"46227394029fb44dba4a63e6488a9dfab06641f1dffd60fc1769e1f7240d858f"} Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.345270 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api-log" containerID="cri-o://1faaca327f5f255f0110639e69475e673f9cf85d2a5f0e368b4e02002323c46e" gracePeriod=30 Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.345675 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 14:28:48 crc 
kubenswrapper[4836]: I0217 14:28:48.345716 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api" containerID="cri-o://46227394029fb44dba4a63e6488a9dfab06641f1dffd60fc1769e1f7240d858f" gracePeriod=30 Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.363453 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.363619 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.366470 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" podStartSLOduration=8.366443561 podStartE2EDuration="8.366443561s" podCreationTimestamp="2026-02-17 14:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:48.352634376 +0000 UTC m=+1354.695562655" watchObservedRunningTime="2026-02-17 14:28:48.366443561 +0000 UTC m=+1354.709371830" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.381601 4836 generic.go:334] "Generic (PLEG): container finished" podID="4e016162-2025-44ad-989d-ce71d9f8f9bf" containerID="fc7f81c47e20cce7a74c227545b963bd61d6dadbccf7dacfaa97a9b912354775" exitCode=0 Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.381734 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-pvljf" event={"ID":"4e016162-2025-44ad-989d-ce71d9f8f9bf","Type":"ContainerDied","Data":"fc7f81c47e20cce7a74c227545b963bd61d6dadbccf7dacfaa97a9b912354775"} Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.398377 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=7.398349279 podStartE2EDuration="7.398349279s" podCreationTimestamp="2026-02-17 14:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:28:48.38551849 +0000 UTC m=+1354.728446789" watchObservedRunningTime="2026-02-17 14:28:48.398349279 +0000 UTC m=+1354.741277558" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.412569 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc4994bf7-cqhhj" event={"ID":"88848d0f-5d90-4ca0-9a78-d08e73159601","Type":"ContainerStarted","Data":"4f76baeee98c161173add9a106f2846a4169503adf035510e6744fb02106f608"} Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.414691 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.459650 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.460068 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.485661 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.497099 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.501992 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fc4994bf7-cqhhj" podStartSLOduration=4.501961036 podStartE2EDuration="4.501961036s" podCreationTimestamp="2026-02-17 14:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-17 14:28:48.45245549 +0000 UTC m=+1354.795383789" watchObservedRunningTime="2026-02-17 14:28:48.501961036 +0000 UTC m=+1354.844889325" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.548726 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.735370 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.747527 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.761073 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.904574 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-649d9995c8-rcxvp"] Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.905409 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" containerID="cri-o://1cbfdb3ab6153a9ef06f48a83598052cef7a8d6eccc057f49dbfbf1a10abee8c" gracePeriod=30 Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.906393 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" containerID="cri-o://0f637c77116f7f89955d3abdef322406d505cefd736b74cd4ec4ac6a045c16f5" gracePeriod=30 Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.923735 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" 
podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.924444 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.924612 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 17 14:28:48 crc kubenswrapper[4836]: I0217 14:28:48.924737 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.443603 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc4994bf7-cqhhj" event={"ID":"88848d0f-5d90-4ca0-9a78-d08e73159601","Type":"ContainerStarted","Data":"7cede19c06071d792fd48dc22325538e54e536266b5f02989155c0035ae46d82"} Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.475180 4836 generic.go:334] "Generic (PLEG): container finished" podID="ed247b9d-af54-401e-80a3-82d18772f29d" containerID="1cbfdb3ab6153a9ef06f48a83598052cef7a8d6eccc057f49dbfbf1a10abee8c" exitCode=143 Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.475316 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-649d9995c8-rcxvp" 
event={"ID":"ed247b9d-af54-401e-80a3-82d18772f29d","Type":"ContainerDied","Data":"1cbfdb3ab6153a9ef06f48a83598052cef7a8d6eccc057f49dbfbf1a10abee8c"} Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.495632 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f85d3a41-bec9-4783-a2c6-2e6627156cce","Type":"ContainerStarted","Data":"b853d8aee890fe988c231f41ba4f29bef7974b65a59ee2897863ebef85aca6e3"} Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.534125 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.666934042 podStartE2EDuration="9.534098919s" podCreationTimestamp="2026-02-17 14:28:40 +0000 UTC" firstStartedPulling="2026-02-17 14:28:43.882463596 +0000 UTC m=+1350.225391865" lastFinishedPulling="2026-02-17 14:28:45.749628473 +0000 UTC m=+1352.092556742" observedRunningTime="2026-02-17 14:28:49.533372468 +0000 UTC m=+1355.876300737" watchObservedRunningTime="2026-02-17 14:28:49.534098919 +0000 UTC m=+1355.877027198" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.549194 4836 generic.go:334] "Generic (PLEG): container finished" podID="68fadcf3-845e-4605-add5-6b5b6092e443" containerID="46227394029fb44dba4a63e6488a9dfab06641f1dffd60fc1769e1f7240d858f" exitCode=0 Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.549668 4836 generic.go:334] "Generic (PLEG): container finished" podID="68fadcf3-845e-4605-add5-6b5b6092e443" containerID="1faaca327f5f255f0110639e69475e673f9cf85d2a5f0e368b4e02002323c46e" exitCode=143 Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.549416 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"68fadcf3-845e-4605-add5-6b5b6092e443","Type":"ContainerDied","Data":"46227394029fb44dba4a63e6488a9dfab06641f1dffd60fc1769e1f7240d858f"} Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.552049 4836 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.552147 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"68fadcf3-845e-4605-add5-6b5b6092e443","Type":"ContainerDied","Data":"1faaca327f5f255f0110639e69475e673f9cf85d2a5f0e368b4e02002323c46e"} Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.554922 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.554965 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.554982 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.830759 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.909733 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-scripts\") pod \"68fadcf3-845e-4605-add5-6b5b6092e443\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.910245 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrtlp\" (UniqueName: \"kubernetes.io/projected/68fadcf3-845e-4605-add5-6b5b6092e443-kube-api-access-wrtlp\") pod \"68fadcf3-845e-4605-add5-6b5b6092e443\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.910408 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fadcf3-845e-4605-add5-6b5b6092e443-logs\") pod \"68fadcf3-845e-4605-add5-6b5b6092e443\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.910455 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data-custom\") pod \"68fadcf3-845e-4605-add5-6b5b6092e443\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.910647 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68fadcf3-845e-4605-add5-6b5b6092e443-etc-machine-id\") pod \"68fadcf3-845e-4605-add5-6b5b6092e443\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.910691 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data\") pod \"68fadcf3-845e-4605-add5-6b5b6092e443\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.910748 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-combined-ca-bundle\") pod \"68fadcf3-845e-4605-add5-6b5b6092e443\" (UID: \"68fadcf3-845e-4605-add5-6b5b6092e443\") " Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.911249 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68fadcf3-845e-4605-add5-6b5b6092e443-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "68fadcf3-845e-4605-add5-6b5b6092e443" (UID: "68fadcf3-845e-4605-add5-6b5b6092e443"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.912610 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68fadcf3-845e-4605-add5-6b5b6092e443-logs" (OuterVolumeSpecName: "logs") pod "68fadcf3-845e-4605-add5-6b5b6092e443" (UID: "68fadcf3-845e-4605-add5-6b5b6092e443"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.922164 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68fadcf3-845e-4605-add5-6b5b6092e443-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.922212 4836 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68fadcf3-845e-4605-add5-6b5b6092e443-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.948721 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-scripts" (OuterVolumeSpecName: "scripts") pod "68fadcf3-845e-4605-add5-6b5b6092e443" (UID: "68fadcf3-845e-4605-add5-6b5b6092e443"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.949156 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68fadcf3-845e-4605-add5-6b5b6092e443-kube-api-access-wrtlp" (OuterVolumeSpecName: "kube-api-access-wrtlp") pod "68fadcf3-845e-4605-add5-6b5b6092e443" (UID: "68fadcf3-845e-4605-add5-6b5b6092e443"). InnerVolumeSpecName "kube-api-access-wrtlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:49 crc kubenswrapper[4836]: I0217 14:28:49.949266 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "68fadcf3-845e-4605-add5-6b5b6092e443" (UID: "68fadcf3-845e-4605-add5-6b5b6092e443"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.010348 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68fadcf3-845e-4605-add5-6b5b6092e443" (UID: "68fadcf3-845e-4605-add5-6b5b6092e443"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.010521 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data" (OuterVolumeSpecName: "config-data") pod "68fadcf3-845e-4605-add5-6b5b6092e443" (UID: "68fadcf3-845e-4605-add5-6b5b6092e443"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.024932 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.024986 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.024998 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.025007 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrtlp\" (UniqueName: \"kubernetes.io/projected/68fadcf3-845e-4605-add5-6b5b6092e443-kube-api-access-wrtlp\") on node \"crc\" DevicePath \"\"" Feb 17 
14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.025017 4836 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68fadcf3-845e-4605-add5-6b5b6092e443-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.472269 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.527847 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-combined-ca-bundle\") pod \"4e016162-2025-44ad-989d-ce71d9f8f9bf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.528007 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-scripts\") pod \"4e016162-2025-44ad-989d-ce71d9f8f9bf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.528482 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-certs\") pod \"4e016162-2025-44ad-989d-ce71d9f8f9bf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.528578 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-config-data\") pod \"4e016162-2025-44ad-989d-ce71d9f8f9bf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.528702 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-hfrn2\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-kube-api-access-hfrn2\") pod \"4e016162-2025-44ad-989d-ce71d9f8f9bf\" (UID: \"4e016162-2025-44ad-989d-ce71d9f8f9bf\") " Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.555609 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-scripts" (OuterVolumeSpecName: "scripts") pod "4e016162-2025-44ad-989d-ce71d9f8f9bf" (UID: "4e016162-2025-44ad-989d-ce71d9f8f9bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.557606 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-certs" (OuterVolumeSpecName: "certs") pod "4e016162-2025-44ad-989d-ce71d9f8f9bf" (UID: "4e016162-2025-44ad-989d-ce71d9f8f9bf"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.558711 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-kube-api-access-hfrn2" (OuterVolumeSpecName: "kube-api-access-hfrn2") pod "4e016162-2025-44ad-989d-ce71d9f8f9bf" (UID: "4e016162-2025-44ad-989d-ce71d9f8f9bf"). InnerVolumeSpecName "kube-api-access-hfrn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.600682 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-config-data" (OuterVolumeSpecName: "config-data") pod "4e016162-2025-44ad-989d-ce71d9f8f9bf" (UID: "4e016162-2025-44ad-989d-ce71d9f8f9bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.600740 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e016162-2025-44ad-989d-ce71d9f8f9bf" (UID: "4e016162-2025-44ad-989d-ce71d9f8f9bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.608967 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.626845 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-pvljf" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.632449 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.632491 4836 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.632505 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.632519 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfrn2\" (UniqueName: \"kubernetes.io/projected/4e016162-2025-44ad-989d-ce71d9f8f9bf-kube-api-access-hfrn2\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.632529 4836 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e016162-2025-44ad-989d-ce71d9f8f9bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.669081 4836 generic.go:334] "Generic (PLEG): container finished" podID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerID="ef329f1c472e28115c477d0f824ce0452609f500341f5fe161170bb1b7dd1f36" exitCode=0 Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.676540 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"68fadcf3-845e-4605-add5-6b5b6092e443","Type":"ContainerDied","Data":"f48faf4eb5b80c2380cc0bd40d17893777d9a019452082ae03e6f15d493c9099"} Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.676610 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-pvljf" event={"ID":"4e016162-2025-44ad-989d-ce71d9f8f9bf","Type":"ContainerDied","Data":"5256492605b5f72154c618f9880c205b521d09a7d2c8e835b6a6c8642893045e"} Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.676631 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5256492605b5f72154c618f9880c205b521d09a7d2c8e835b6a6c8642893045e" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.676644 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc789578f-mcrrx" event={"ID":"a7dc98d2-302d-4633-8123-fe76bb7dbd40","Type":"ContainerDied","Data":"ef329f1c472e28115c477d0f824ce0452609f500341f5fe161170bb1b7dd1f36"} Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.676676 4836 scope.go:117] "RemoveContainer" containerID="46227394029fb44dba4a63e6488a9dfab06641f1dffd60fc1769e1f7240d858f" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.767685 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.798857 4836 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.805686 4836 scope.go:117] "RemoveContainer" containerID="1faaca327f5f255f0110639e69475e673f9cf85d2a5f0e368b4e02002323c46e" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.810644 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:50 crc kubenswrapper[4836]: E0217 14:28:50.811922 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.811948 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api" Feb 17 14:28:50 crc kubenswrapper[4836]: E0217 14:28:50.811970 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api-log" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.811977 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api-log" Feb 17 14:28:50 crc kubenswrapper[4836]: E0217 14:28:50.811987 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e016162-2025-44ad-989d-ce71d9f8f9bf" containerName="cloudkitty-db-sync" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.811994 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e016162-2025-44ad-989d-ce71d9f8f9bf" containerName="cloudkitty-db-sync" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.812236 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api-log" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.812258 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" containerName="cinder-api" Feb 17 14:28:50 crc kubenswrapper[4836]: 
I0217 14:28:50.812265 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e016162-2025-44ad-989d-ce71d9f8f9bf" containerName="cloudkitty-db-sync" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.813868 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.819620 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.820724 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.821013 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.822644 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979485 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979565 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979604 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979658 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8722776f-950d-46d6-8929-164cc70747af-logs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979718 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprdx\" (UniqueName: \"kubernetes.io/projected/8722776f-950d-46d6-8929-164cc70747af-kube-api-access-nprdx\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979760 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-scripts\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979777 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-config-data-custom\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979850 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8722776f-950d-46d6-8929-164cc70747af-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:50 crc kubenswrapper[4836]: I0217 14:28:50.979890 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-config-data\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.083937 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nprdx\" (UniqueName: \"kubernetes.io/projected/8722776f-950d-46d6-8929-164cc70747af-kube-api-access-nprdx\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.084536 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-scripts\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.084573 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-config-data-custom\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.084827 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8722776f-950d-46d6-8929-164cc70747af-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.084875 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-config-data\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.084912 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.084951 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.084988 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.085060 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8722776f-950d-46d6-8929-164cc70747af-logs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.085752 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8722776f-950d-46d6-8929-164cc70747af-logs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc 
kubenswrapper[4836]: I0217 14:28:51.097361 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-scripts\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.097535 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8722776f-950d-46d6-8929-164cc70747af-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.100605 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-config-data\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.111206 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.117162 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-config-data-custom\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.125445 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nprdx\" (UniqueName: \"kubernetes.io/projected/8722776f-950d-46d6-8929-164cc70747af-kube-api-access-nprdx\") pod \"cinder-api-0\" (UID: 
\"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.130328 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.130962 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8722776f-950d-46d6-8929-164cc70747af-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8722776f-950d-46d6-8929-164cc70747af\") " pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.137713 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.159035 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.191600 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-config\") pod \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.191893 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-public-tls-certs\") pod \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.191964 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-internal-tls-certs\") pod \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.192003 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-httpd-config\") pod \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.192160 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-combined-ca-bundle\") pod \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.192242 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv8ws\" (UniqueName: 
\"kubernetes.io/projected/a7dc98d2-302d-4633-8123-fe76bb7dbd40-kube-api-access-nv8ws\") pod \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.192328 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-ovndb-tls-certs\") pod \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\" (UID: \"a7dc98d2-302d-4633-8123-fe76bb7dbd40\") " Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.209911 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a7dc98d2-302d-4633-8123-fe76bb7dbd40" (UID: "a7dc98d2-302d-4633-8123-fe76bb7dbd40"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.211227 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7dc98d2-302d-4633-8123-fe76bb7dbd40-kube-api-access-nv8ws" (OuterVolumeSpecName: "kube-api-access-nv8ws") pod "a7dc98d2-302d-4633-8123-fe76bb7dbd40" (UID: "a7dc98d2-302d-4633-8123-fe76bb7dbd40"). InnerVolumeSpecName "kube-api-access-nv8ws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.297328 4836 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.297373 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv8ws\" (UniqueName: \"kubernetes.io/projected/a7dc98d2-302d-4633-8123-fe76bb7dbd40-kube-api-access-nv8ws\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.346866 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.365719 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7dc98d2-302d-4633-8123-fe76bb7dbd40" (UID: "a7dc98d2-302d-4633-8123-fe76bb7dbd40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.379082 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a7dc98d2-302d-4633-8123-fe76bb7dbd40" (UID: "a7dc98d2-302d-4633-8123-fe76bb7dbd40"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.405019 4836 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.405070 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.420806 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a7dc98d2-302d-4633-8123-fe76bb7dbd40" (UID: "a7dc98d2-302d-4633-8123-fe76bb7dbd40"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.826206 4836 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.841441 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a7dc98d2-302d-4633-8123-fe76bb7dbd40" (UID: "a7dc98d2-302d-4633-8123-fe76bb7dbd40"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.866936 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-config" (OuterVolumeSpecName: "config") pod "a7dc98d2-302d-4633-8123-fe76bb7dbd40" (UID: "a7dc98d2-302d-4633-8123-fe76bb7dbd40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.885458 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc789578f-mcrrx" event={"ID":"a7dc98d2-302d-4633-8123-fe76bb7dbd40","Type":"ContainerDied","Data":"61e1414473aaed7533a1bb0fd531409b1cf0fa9ea0b92c1ed51519923f9cbabf"} Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.885554 4836 scope.go:117] "RemoveContainer" containerID="29474b05f933bb7261368e691fe6f6124baae6cdcaac7f0997ad485f3fcff20d" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.885754 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc789578f-mcrrx" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.925715 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.925748 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.927620 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.927647 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.948576 4836 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:51 crc kubenswrapper[4836]: I0217 14:28:51.953205 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7dc98d2-302d-4633-8123-fe76bb7dbd40-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.032495 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-9z4jp"] Feb 17 14:28:52 crc kubenswrapper[4836]: E0217 14:28:52.033515 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-httpd" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.033536 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-httpd" Feb 17 14:28:52 crc kubenswrapper[4836]: E0217 14:28:52.033582 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-api" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.033592 4836 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-api" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.033813 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-httpd" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.033849 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" containerName="neutron-api" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.034795 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.045698 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.045907 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.046021 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.046138 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.046258 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-l28cf" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.110073 4836 scope.go:117] "RemoveContainer" containerID="ef329f1c472e28115c477d0f824ce0452609f500341f5fe161170bb1b7dd1f36" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.117075 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-9z4jp"] Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.157612 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-certs\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.157759 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-config-data\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.157805 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-combined-ca-bundle\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.157897 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82gk8\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-kube-api-access-82gk8\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.158136 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-scripts\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.179448 4836 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc789578f-mcrrx"] Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.256397 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bc789578f-mcrrx"] Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.272274 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-scripts\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.272413 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-certs\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.272527 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-config-data\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.272603 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-combined-ca-bundle\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.272693 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82gk8\" (UniqueName: 
\"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-kube-api-access-82gk8\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.308456 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-combined-ca-bundle\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.311904 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-certs\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.322755 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82gk8\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-kube-api-access-82gk8\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.357631 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-scripts\") pod \"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.369047 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-config-data\") pod 
\"cloudkitty-storageinit-9z4jp\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.398861 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.453090 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 14:28:52 crc kubenswrapper[4836]: W0217 14:28:52.508528 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8722776f_950d_46d6_8929_164cc70747af.slice/crio-9637dd07ce54453ecac25fa854611f26e2d792f0eece79611b4ba99df3a280fc WatchSource:0}: Error finding container 9637dd07ce54453ecac25fa854611f26e2d792f0eece79611b4ba99df3a280fc: Status 404 returned error can't find the container with id 9637dd07ce54453ecac25fa854611f26e2d792f0eece79611b4ba99df3a280fc Feb 17 14:28:52 crc kubenswrapper[4836]: E0217 14:28:52.608508 4836 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7dc98d2_302d_4633_8123_fe76bb7dbd40.slice/crio-61e1414473aaed7533a1bb0fd531409b1cf0fa9ea0b92c1ed51519923f9cbabf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7dc98d2_302d_4633_8123_fe76bb7dbd40.slice\": RecentStats: unable to find data in memory cache]" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.720885 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68fadcf3-845e-4605-add5-6b5b6092e443" path="/var/lib/kubelet/pods/68fadcf3-845e-4605-add5-6b5b6092e443/volumes" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.724129 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a7dc98d2-302d-4633-8123-fe76bb7dbd40" path="/var/lib/kubelet/pods/a7dc98d2-302d-4633-8123-fe76bb7dbd40/volumes" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.729502 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" podUID="62b902ba-6ba2-48f3-a6dc-652fd1d6d58c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.180:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:28:52 crc kubenswrapper[4836]: I0217 14:28:52.949067 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8722776f-950d-46d6-8929-164cc70747af","Type":"ContainerStarted","Data":"9637dd07ce54453ecac25fa854611f26e2d792f0eece79611b4ba99df3a280fc"} Feb 17 14:28:53 crc kubenswrapper[4836]: I0217 14:28:53.257316 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-9z4jp"] Feb 17 14:28:53 crc kubenswrapper[4836]: I0217 14:28:53.766500 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dc9c9fdbb-zxjj6" podUID="62b902ba-6ba2-48f3-a6dc-652fd1d6d58c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.180:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:28:53 crc kubenswrapper[4836]: I0217 14:28:53.970793 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8722776f-950d-46d6-8929-164cc70747af","Type":"ContainerStarted","Data":"52da630d0b5f10dbdeca848132a5aeddff0b17cf2d63ba22d9954c518c687970"} Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.006786 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.007144 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.242328 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.242544 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.742204 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.870020 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.870624 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:28:54 crc kubenswrapper[4836]: I0217 14:28:54.871527 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 14:28:56 crc kubenswrapper[4836]: I0217 14:28:56.338041 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:57282->10.217.0.179:9311: read: connection reset by peer" Feb 17 14:28:56 crc kubenswrapper[4836]: I0217 14:28:56.341111 4836 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:57288->10.217.0.179:9311: read: connection reset by peer" Feb 17 14:28:56 crc kubenswrapper[4836]: I0217 14:28:56.565902 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:28:56 crc kubenswrapper[4836]: I0217 14:28:56.662929 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-knj6m"] Feb 17 14:28:56 crc kubenswrapper[4836]: I0217 14:28:56.663239 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerName="dnsmasq-dns" containerID="cri-o://85a21ea6f28662473a5cbe42dfa68ac85c766a6f09753e438f2c37af7356f777" gracePeriod=10 Feb 17 14:28:56 crc kubenswrapper[4836]: I0217 14:28:56.701661 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 14:28:56 crc kubenswrapper[4836]: I0217 14:28:56.775751 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.013316 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-649d9995c8-rcxvp" event={"ID":"ed247b9d-af54-401e-80a3-82d18772f29d","Type":"ContainerDied","Data":"0f637c77116f7f89955d3abdef322406d505cefd736b74cd4ec4ac6a045c16f5"} Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.013252 4836 generic.go:334] "Generic (PLEG): container finished" podID="ed247b9d-af54-401e-80a3-82d18772f29d" containerID="0f637c77116f7f89955d3abdef322406d505cefd736b74cd4ec4ac6a045c16f5" exitCode=0 Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.017422 4836 generic.go:334] "Generic (PLEG): 
container finished" podID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerID="85a21ea6f28662473a5cbe42dfa68ac85c766a6f09753e438f2c37af7356f777" exitCode=0 Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.017509 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" event={"ID":"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c","Type":"ContainerDied","Data":"85a21ea6f28662473a5cbe42dfa68ac85c766a6f09753e438f2c37af7356f777"} Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.017798 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="cinder-scheduler" containerID="cri-o://713b2d67a8bcdbac716e75c48ae87c2e03fb9564a7bca1ee0f598db4994298c7" gracePeriod=30 Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.017949 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="probe" containerID="cri-o://b853d8aee890fe988c231f41ba4f29bef7974b65a59ee2897863ebef85aca6e3" gracePeriod=30 Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.256062 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": dial tcp 10.217.0.179:9311: connect: connection refused" Feb 17 14:28:57 crc kubenswrapper[4836]: I0217 14:28:57.256059 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-649d9995c8-rcxvp" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": dial tcp 10.217.0.179:9311: connect: connection refused" Feb 17 14:28:58 crc kubenswrapper[4836]: I0217 14:28:58.051863 4836 generic.go:334] 
"Generic (PLEG): container finished" podID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerID="b853d8aee890fe988c231f41ba4f29bef7974b65a59ee2897863ebef85aca6e3" exitCode=0 Feb 17 14:28:58 crc kubenswrapper[4836]: I0217 14:28:58.051979 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f85d3a41-bec9-4783-a2c6-2e6627156cce","Type":"ContainerDied","Data":"b853d8aee890fe988c231f41ba4f29bef7974b65a59ee2897863ebef85aca6e3"} Feb 17 14:28:59 crc kubenswrapper[4836]: I0217 14:28:59.075889 4836 generic.go:334] "Generic (PLEG): container finished" podID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerID="713b2d67a8bcdbac716e75c48ae87c2e03fb9564a7bca1ee0f598db4994298c7" exitCode=0 Feb 17 14:28:59 crc kubenswrapper[4836]: I0217 14:28:59.076392 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f85d3a41-bec9-4783-a2c6-2e6627156cce","Type":"ContainerDied","Data":"713b2d67a8bcdbac716e75c48ae87c2e03fb9564a7bca1ee0f598db4994298c7"} Feb 17 14:28:59 crc kubenswrapper[4836]: I0217 14:28:59.312924 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: connect: connection refused" Feb 17 14:28:59 crc kubenswrapper[4836]: W0217 14:28:59.498317 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf38b5f94_bc8b_4e64_abe6_8c39b920cb4b.slice/crio-bcf545502020f0699c5847e3cc9076fff2937319f46acd5c1c65027d98b9be99 WatchSource:0}: Error finding container bcf545502020f0699c5847e3cc9076fff2937319f46acd5c1c65027d98b9be99: Status 404 returned error can't find the container with id bcf545502020f0699c5847e3cc9076fff2937319f46acd5c1c65027d98b9be99 Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.089232 4836 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cloudkitty-storageinit-9z4jp" event={"ID":"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b","Type":"ContainerStarted","Data":"bcf545502020f0699c5847e3cc9076fff2937319f46acd5c1c65027d98b9be99"} Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.841108 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.854670 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.931568 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-svc\") pod \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.931667 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-sb\") pod \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.931789 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-config\") pod \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.931921 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7h58\" (UniqueName: \"kubernetes.io/projected/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-kube-api-access-f7h58\") pod \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") 
" Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.932031 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-nb\") pod \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " Feb 17 14:29:00 crc kubenswrapper[4836]: I0217 14:29:00.932180 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-swift-storage-0\") pod \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\" (UID: \"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c\") " Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:00.998663 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-kube-api-access-f7h58" (OuterVolumeSpecName: "kube-api-access-f7h58") pod "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" (UID: "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c"). InnerVolumeSpecName "kube-api-access-f7h58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.039758 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed247b9d-af54-401e-80a3-82d18772f29d-logs\") pod \"ed247b9d-af54-401e-80a3-82d18772f29d\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.040102 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data-custom\") pod \"ed247b9d-af54-401e-80a3-82d18772f29d\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.040144 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-combined-ca-bundle\") pod \"ed247b9d-af54-401e-80a3-82d18772f29d\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.040188 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slsjs\" (UniqueName: \"kubernetes.io/projected/ed247b9d-af54-401e-80a3-82d18772f29d-kube-api-access-slsjs\") pod \"ed247b9d-af54-401e-80a3-82d18772f29d\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.040226 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data\") pod \"ed247b9d-af54-401e-80a3-82d18772f29d\" (UID: \"ed247b9d-af54-401e-80a3-82d18772f29d\") " Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.041063 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7h58\" (UniqueName: 
\"kubernetes.io/projected/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-kube-api-access-f7h58\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.043573 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed247b9d-af54-401e-80a3-82d18772f29d-logs" (OuterVolumeSpecName: "logs") pod "ed247b9d-af54-401e-80a3-82d18772f29d" (UID: "ed247b9d-af54-401e-80a3-82d18772f29d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.095958 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed247b9d-af54-401e-80a3-82d18772f29d" (UID: "ed247b9d-af54-401e-80a3-82d18772f29d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.137981 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed247b9d-af54-401e-80a3-82d18772f29d-kube-api-access-slsjs" (OuterVolumeSpecName: "kube-api-access-slsjs") pod "ed247b9d-af54-401e-80a3-82d18772f29d" (UID: "ed247b9d-af54-401e-80a3-82d18772f29d"). InnerVolumeSpecName "kube-api-access-slsjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.150248 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slsjs\" (UniqueName: \"kubernetes.io/projected/ed247b9d-af54-401e-80a3-82d18772f29d-kube-api-access-slsjs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.150312 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed247b9d-af54-401e-80a3-82d18772f29d-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.150327 4836 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.225698 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" event={"ID":"7622952e-3f9a-4569-8f4d-8a07f1cbcd2c","Type":"ContainerDied","Data":"8555cc4b8a651ad8d38601eead66d8910d6d4cd8c7c50d4ab726898662d8c02f"} Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.225781 4836 scope.go:117] "RemoveContainer" containerID="85a21ea6f28662473a5cbe42dfa68ac85c766a6f09753e438f2c37af7356f777" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.225994 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-knj6m" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.245478 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-649d9995c8-rcxvp" event={"ID":"ed247b9d-af54-401e-80a3-82d18772f29d","Type":"ContainerDied","Data":"569d3d6ec6caf0a5240f5c4b4d890d8ae7a0c22c3878dcb1e8c71559ae9f5a26"} Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.245953 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-649d9995c8-rcxvp" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.319434 4836 scope.go:117] "RemoveContainer" containerID="50d4a249bcc48e57b448052c5a0747dd07cf392d7bd62132728c04243ac9a69b" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.442762 4836 scope.go:117] "RemoveContainer" containerID="0f637c77116f7f89955d3abdef322406d505cefd736b74cd4ec4ac6a045c16f5" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.517206 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed247b9d-af54-401e-80a3-82d18772f29d" (UID: "ed247b9d-af54-401e-80a3-82d18772f29d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.520505 4836 scope.go:117] "RemoveContainer" containerID="1cbfdb3ab6153a9ef06f48a83598052cef7a8d6eccc057f49dbfbf1a10abee8c" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.576176 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.728320 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" (UID: "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.756035 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" (UID: "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.770639 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" (UID: "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.787826 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.787879 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.787899 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.850615 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data" (OuterVolumeSpecName: 
"config-data") pod "ed247b9d-af54-401e-80a3-82d18772f29d" (UID: "ed247b9d-af54-401e-80a3-82d18772f29d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.851027 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" (UID: "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.881968 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-config" (OuterVolumeSpecName: "config") pod "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" (UID: "7622952e-3f9a-4569-8f4d-8a07f1cbcd2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.890722 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed247b9d-af54-401e-80a3-82d18772f29d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.890788 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:01 crc kubenswrapper[4836]: I0217 14:29:01.890805 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.000462 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.098512 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-combined-ca-bundle\") pod \"f85d3a41-bec9-4783-a2c6-2e6627156cce\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.098563 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data\") pod \"f85d3a41-bec9-4783-a2c6-2e6627156cce\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.098592 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcwnw\" (UniqueName: \"kubernetes.io/projected/f85d3a41-bec9-4783-a2c6-2e6627156cce-kube-api-access-mcwnw\") pod \"f85d3a41-bec9-4783-a2c6-2e6627156cce\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.098637 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-scripts\") pod \"f85d3a41-bec9-4783-a2c6-2e6627156cce\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.098678 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data-custom\") pod \"f85d3a41-bec9-4783-a2c6-2e6627156cce\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.098743 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/f85d3a41-bec9-4783-a2c6-2e6627156cce-etc-machine-id\") pod \"f85d3a41-bec9-4783-a2c6-2e6627156cce\" (UID: \"f85d3a41-bec9-4783-a2c6-2e6627156cce\") " Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.099241 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85d3a41-bec9-4783-a2c6-2e6627156cce-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f85d3a41-bec9-4783-a2c6-2e6627156cce" (UID: "f85d3a41-bec9-4783-a2c6-2e6627156cce"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.108852 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85d3a41-bec9-4783-a2c6-2e6627156cce-kube-api-access-mcwnw" (OuterVolumeSpecName: "kube-api-access-mcwnw") pod "f85d3a41-bec9-4783-a2c6-2e6627156cce" (UID: "f85d3a41-bec9-4783-a2c6-2e6627156cce"). InnerVolumeSpecName "kube-api-access-mcwnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.116885 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f85d3a41-bec9-4783-a2c6-2e6627156cce" (UID: "f85d3a41-bec9-4783-a2c6-2e6627156cce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.149650 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-scripts" (OuterVolumeSpecName: "scripts") pod "f85d3a41-bec9-4783-a2c6-2e6627156cce" (UID: "f85d3a41-bec9-4783-a2c6-2e6627156cce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.201975 4836 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d3a41-bec9-4783-a2c6-2e6627156cce-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.202068 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcwnw\" (UniqueName: \"kubernetes.io/projected/f85d3a41-bec9-4783-a2c6-2e6627156cce-kube-api-access-mcwnw\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.202083 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.202098 4836 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.267437 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-649d9995c8-rcxvp"] Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.290385 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-649d9995c8-rcxvp"] Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.316790 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f85d3a41-bec9-4783-a2c6-2e6627156cce","Type":"ContainerDied","Data":"d7078c4a99b5b82f758808cb22901fe00a2b1684715c3195cb32e503ee688469"} Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.316866 4836 scope.go:117] "RemoveContainer" containerID="b853d8aee890fe988c231f41ba4f29bef7974b65a59ee2897863ebef85aca6e3" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 
14:29:02.317029 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.324930 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-knj6m"] Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.345760 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-9z4jp" event={"ID":"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b","Type":"ContainerStarted","Data":"852265bc6ffb6ef9657692f454a84caf832b683e76f800e8dccb3317d95a69ea"} Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.363346 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-knj6m"] Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.370813 4836 scope.go:117] "RemoveContainer" containerID="713b2d67a8bcdbac716e75c48ae87c2e03fb9564a7bca1ee0f598db4994298c7" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.378524 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-9z4jp" podStartSLOduration=11.378493514 podStartE2EDuration="11.378493514s" podCreationTimestamp="2026-02-17 14:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:02.368662179 +0000 UTC m=+1368.711590448" watchObservedRunningTime="2026-02-17 14:29:02.378493514 +0000 UTC m=+1368.721421783" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.635745 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85d3a41-bec9-4783-a2c6-2e6627156cce" (UID: "f85d3a41-bec9-4783-a2c6-2e6627156cce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.647843 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.654763 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" path="/var/lib/kubelet/pods/7622952e-3f9a-4569-8f4d-8a07f1cbcd2c/volumes" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.655620 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" path="/var/lib/kubelet/pods/ed247b9d-af54-401e-80a3-82d18772f29d/volumes" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.733443 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data" (OuterVolumeSpecName: "config-data") pod "f85d3a41-bec9-4783-a2c6-2e6627156cce" (UID: "f85d3a41-bec9-4783-a2c6-2e6627156cce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.750846 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d3a41-bec9-4783-a2c6-2e6627156cce-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:02 crc kubenswrapper[4836]: I0217 14:29:02.986901 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.008495 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.061943 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.066941 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.066983 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:29:03 crc kubenswrapper[4836]: E0217 14:29:03.067536 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067561 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" Feb 17 14:29:03 crc kubenswrapper[4836]: E0217 14:29:03.067578 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerName="init" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067585 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerName="init" Feb 17 14:29:03 crc kubenswrapper[4836]: E0217 14:29:03.067602 4836 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="probe" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067609 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="probe" Feb 17 14:29:03 crc kubenswrapper[4836]: E0217 14:29:03.067624 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="cinder-scheduler" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067631 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="cinder-scheduler" Feb 17 14:29:03 crc kubenswrapper[4836]: E0217 14:29:03.067651 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerName="dnsmasq-dns" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067659 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerName="dnsmasq-dns" Feb 17 14:29:03 crc kubenswrapper[4836]: E0217 14:29:03.067668 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067674 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067912 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="7622952e-3f9a-4569-8f4d-8a07f1cbcd2c" containerName="dnsmasq-dns" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067932 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="probe" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067956 4836 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api-log" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067970 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed247b9d-af54-401e-80a3-82d18772f29d" containerName="barbican-api" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.067980 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" containerName="cinder-scheduler" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.070239 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.072853 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.109868 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:29:03 crc kubenswrapper[4836]: E0217 14:29:03.197045 4836 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d3a41_bec9_4783_a2c6_2e6627156cce.slice\": RecentStats: unable to find data in memory cache]" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.274561 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.274666 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.274773 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.274859 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e6a7955-6cfb-4afe-b94a-8900513e5821-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.274924 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.274951 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp7vf\" (UniqueName: \"kubernetes.io/projected/0e6a7955-6cfb-4afe-b94a-8900513e5821-kube-api-access-fp7vf\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.376927 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8722776f-950d-46d6-8929-164cc70747af","Type":"ContainerStarted","Data":"c09d4eb94f75fb36fdad30edc1250daa438667981d17bf085edb909265f5f881"} Feb 17 14:29:03 crc kubenswrapper[4836]: 
I0217 14:29:03.377638 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.378435 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.387558 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.388238 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e6a7955-6cfb-4afe-b94a-8900513e5821-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.388382 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e6a7955-6cfb-4afe-b94a-8900513e5821-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.388869 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.392412 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp7vf\" (UniqueName: 
\"kubernetes.io/projected/0e6a7955-6cfb-4afe-b94a-8900513e5821-kube-api-access-fp7vf\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.395728 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.396703 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-central-agent" containerID="cri-o://6102d176b1010bbf234d415140cba35d28570c5b514c7edd1c4a0962a14c5149" gracePeriod=30 Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.396840 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="proxy-httpd" containerID="cri-o://00d436156aa07858f79630bc19852984525e2688b6a3d2302eeae168425ab6a8" gracePeriod=30 Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.396898 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="sg-core" containerID="cri-o://b11cf843196ed96ab329470f8fb90c845e937e84667798d3853568520da77e41" gracePeriod=30 Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.396950 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-notification-agent" containerID="cri-o://adc3ef3643d684dbbbf0790a30dd752752d5a28971c3915143c0a6ec314bc365" gracePeriod=30 Feb 17 14:29:03 crc kubenswrapper[4836]: 
I0217 14:29:03.395875 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerStarted","Data":"00d436156aa07858f79630bc19852984525e2688b6a3d2302eeae168425ab6a8"} Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.403376 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.405466 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.408763 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.427901 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.433083 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp7vf\" (UniqueName: \"kubernetes.io/projected/0e6a7955-6cfb-4afe-b94a-8900513e5821-kube-api-access-fp7vf\") pod \"cinder-scheduler-0\" (UID: 
\"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.433114 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e6a7955-6cfb-4afe-b94a-8900513e5821-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e6a7955-6cfb-4afe-b94a-8900513e5821\") " pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.449651 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=13.44962423 podStartE2EDuration="13.44962423s" podCreationTimestamp="2026-02-17 14:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:03.422372095 +0000 UTC m=+1369.765300384" watchObservedRunningTime="2026-02-17 14:29:03.44962423 +0000 UTC m=+1369.792552499" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.534100 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.678916803 podStartE2EDuration="1m26.534070411s" podCreationTimestamp="2026-02-17 14:27:37 +0000 UTC" firstStartedPulling="2026-02-17 14:27:41.033178975 +0000 UTC m=+1287.376107244" lastFinishedPulling="2026-02-17 14:29:00.888332583 +0000 UTC m=+1367.231260852" observedRunningTime="2026-02-17 14:29:03.503332231 +0000 UTC m=+1369.846260520" watchObservedRunningTime="2026-02-17 14:29:03.534070411 +0000 UTC m=+1369.876998690" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.710874 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 14:29:03 crc kubenswrapper[4836]: I0217 14:29:03.920432 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bc958ddf6-kh2rq" Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.019910 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-55d7557768-wvvpt"] Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.447522 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.456773 4836 generic.go:334] "Generic (PLEG): container finished" podID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerID="00d436156aa07858f79630bc19852984525e2688b6a3d2302eeae168425ab6a8" exitCode=0 Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.456822 4836 generic.go:334] "Generic (PLEG): container finished" podID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerID="b11cf843196ed96ab329470f8fb90c845e937e84667798d3853568520da77e41" exitCode=2 Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.458095 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerDied","Data":"00d436156aa07858f79630bc19852984525e2688b6a3d2302eeae168425ab6a8"} Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.458135 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerDied","Data":"b11cf843196ed96ab329470f8fb90c845e937e84667798d3853568520da77e41"} Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.458664 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-55d7557768-wvvpt" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-log" 
containerID="cri-o://d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9" gracePeriod=30 Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.467908 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-55d7557768-wvvpt" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-api" containerID="cri-o://f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387" gracePeriod=30 Feb 17 14:29:04 crc kubenswrapper[4836]: I0217 14:29:04.647615 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85d3a41-bec9-4783-a2c6-2e6627156cce" path="/var/lib/kubelet/pods/f85d3a41-bec9-4783-a2c6-2e6627156cce/volumes" Feb 17 14:29:05 crc kubenswrapper[4836]: I0217 14:29:05.261621 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-78c4d587b5-cqhdl" Feb 17 14:29:05 crc kubenswrapper[4836]: I0217 14:29:05.479208 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e6a7955-6cfb-4afe-b94a-8900513e5821","Type":"ContainerStarted","Data":"bad994fb76f9d443edc0fdf20c6b6fc382886e5e32679fe615b7082beecb7dc9"} Feb 17 14:29:05 crc kubenswrapper[4836]: I0217 14:29:05.481529 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e6a7955-6cfb-4afe-b94a-8900513e5821","Type":"ContainerStarted","Data":"0ff66183221e23285e33ce8ba7036e671c6744494c4f6afbcbe3b944a278bb33"} Feb 17 14:29:05 crc kubenswrapper[4836]: I0217 14:29:05.507535 4836 generic.go:334] "Generic (PLEG): container finished" podID="21c73844-3235-4a12-9f77-901ba8614e11" containerID="d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9" exitCode=143 Feb 17 14:29:05 crc kubenswrapper[4836]: I0217 14:29:05.507666 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d7557768-wvvpt" 
event={"ID":"21c73844-3235-4a12-9f77-901ba8614e11","Type":"ContainerDied","Data":"d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9"} Feb 17 14:29:05 crc kubenswrapper[4836]: I0217 14:29:05.511454 4836 generic.go:334] "Generic (PLEG): container finished" podID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerID="6102d176b1010bbf234d415140cba35d28570c5b514c7edd1c4a0962a14c5149" exitCode=0 Feb 17 14:29:05 crc kubenswrapper[4836]: I0217 14:29:05.511488 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerDied","Data":"6102d176b1010bbf234d415140cba35d28570c5b514c7edd1c4a0962a14c5149"} Feb 17 14:29:06 crc kubenswrapper[4836]: I0217 14:29:06.528198 4836 generic.go:334] "Generic (PLEG): container finished" podID="f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" containerID="852265bc6ffb6ef9657692f454a84caf832b683e76f800e8dccb3317d95a69ea" exitCode=0 Feb 17 14:29:06 crc kubenswrapper[4836]: I0217 14:29:06.528429 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-9z4jp" event={"ID":"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b","Type":"ContainerDied","Data":"852265bc6ffb6ef9657692f454a84caf832b683e76f800e8dccb3317d95a69ea"} Feb 17 14:29:06 crc kubenswrapper[4836]: I0217 14:29:06.532438 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e6a7955-6cfb-4afe-b94a-8900513e5821","Type":"ContainerStarted","Data":"ce53db1025dd2df25f65899e35ad1a8def5e8eb83bb5ab312beb7b67fda33f93"} Feb 17 14:29:06 crc kubenswrapper[4836]: I0217 14:29:06.582063 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.582035321 podStartE2EDuration="4.582035321s" podCreationTimestamp="2026-02-17 14:29:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 14:29:06.573794308 +0000 UTC m=+1372.916722577" watchObservedRunningTime="2026-02-17 14:29:06.582035321 +0000 UTC m=+1372.924963590" Feb 17 14:29:07 crc kubenswrapper[4836]: I0217 14:29:07.582010 4836 generic.go:334] "Generic (PLEG): container finished" podID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerID="adc3ef3643d684dbbbf0790a30dd752752d5a28971c3915143c0a6ec314bc365" exitCode=0 Feb 17 14:29:07 crc kubenswrapper[4836]: I0217 14:29:07.582082 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerDied","Data":"adc3ef3643d684dbbbf0790a30dd752752d5a28971c3915143c0a6ec314bc365"} Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.241571 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.326917 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.347732 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-combined-ca-bundle\") pod \"2a1d16f5-4710-43b4-805e-315ed73bb24e\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.348715 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-config-data\") pod \"2a1d16f5-4710-43b4-805e-315ed73bb24e\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.348915 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-config-data\") pod \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.348961 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-run-httpd\") pod \"2a1d16f5-4710-43b4-805e-315ed73bb24e\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.349016 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-scripts\") pod \"2a1d16f5-4710-43b4-805e-315ed73bb24e\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.349072 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-sg-core-conf-yaml\") pod \"2a1d16f5-4710-43b4-805e-315ed73bb24e\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.349119 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpknx\" (UniqueName: \"kubernetes.io/projected/2a1d16f5-4710-43b4-805e-315ed73bb24e-kube-api-access-dpknx\") pod \"2a1d16f5-4710-43b4-805e-315ed73bb24e\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.349146 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-scripts\") pod \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.349170 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-certs\") pod \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.349195 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-log-httpd\") pod \"2a1d16f5-4710-43b4-805e-315ed73bb24e\" (UID: \"2a1d16f5-4710-43b4-805e-315ed73bb24e\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.350244 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2a1d16f5-4710-43b4-805e-315ed73bb24e" (UID: "2a1d16f5-4710-43b4-805e-315ed73bb24e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.350863 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.355454 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2a1d16f5-4710-43b4-805e-315ed73bb24e" (UID: "2a1d16f5-4710-43b4-805e-315ed73bb24e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.357807 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-certs" (OuterVolumeSpecName: "certs") pod "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" (UID: "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.360803 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1d16f5-4710-43b4-805e-315ed73bb24e-kube-api-access-dpknx" (OuterVolumeSpecName: "kube-api-access-dpknx") pod "2a1d16f5-4710-43b4-805e-315ed73bb24e" (UID: "2a1d16f5-4710-43b4-805e-315ed73bb24e"). InnerVolumeSpecName "kube-api-access-dpknx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.369208 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-scripts" (OuterVolumeSpecName: "scripts") pod "2a1d16f5-4710-43b4-805e-315ed73bb24e" (UID: "2a1d16f5-4710-43b4-805e-315ed73bb24e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.380437 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-scripts" (OuterVolumeSpecName: "scripts") pod "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" (UID: "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.406705 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2a1d16f5-4710-43b4-805e-315ed73bb24e" (UID: "2a1d16f5-4710-43b4-805e-315ed73bb24e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.412838 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-config-data" (OuterVolumeSpecName: "config-data") pod "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" (UID: "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.417984 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.464173 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a1d16f5-4710-43b4-805e-315ed73bb24e" (UID: "2a1d16f5-4710-43b4-805e-315ed73bb24e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.472948 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfjpm\" (UniqueName: \"kubernetes.io/projected/21c73844-3235-4a12-9f77-901ba8614e11-kube-api-access-nfjpm\") pod \"21c73844-3235-4a12-9f77-901ba8614e11\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473067 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-combined-ca-bundle\") pod \"21c73844-3235-4a12-9f77-901ba8614e11\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473142 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-scripts\") pod \"21c73844-3235-4a12-9f77-901ba8614e11\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473323 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c73844-3235-4a12-9f77-901ba8614e11-logs\") pod \"21c73844-3235-4a12-9f77-901ba8614e11\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473362 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-internal-tls-certs\") pod \"21c73844-3235-4a12-9f77-901ba8614e11\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473455 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-config-data\") pod \"21c73844-3235-4a12-9f77-901ba8614e11\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473485 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-combined-ca-bundle\") pod \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473506 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-public-tls-certs\") pod \"21c73844-3235-4a12-9f77-901ba8614e11\" (UID: \"21c73844-3235-4a12-9f77-901ba8614e11\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.473550 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82gk8\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-kube-api-access-82gk8\") pod \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\" (UID: \"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b\") " Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.478512 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c73844-3235-4a12-9f77-901ba8614e11-logs" (OuterVolumeSpecName: "logs") pod "21c73844-3235-4a12-9f77-901ba8614e11" (UID: "21c73844-3235-4a12-9f77-901ba8614e11"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.481729 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-kube-api-access-82gk8" (OuterVolumeSpecName: "kube-api-access-82gk8") pod "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" (UID: "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b"). InnerVolumeSpecName "kube-api-access-82gk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.482556 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c73844-3235-4a12-9f77-901ba8614e11-kube-api-access-nfjpm" (OuterVolumeSpecName: "kube-api-access-nfjpm") pod "21c73844-3235-4a12-9f77-901ba8614e11" (UID: "21c73844-3235-4a12-9f77-901ba8614e11"). InnerVolumeSpecName "kube-api-access-nfjpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488514 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488548 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488565 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21c73844-3235-4a12-9f77-901ba8614e11-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488576 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpknx\" (UniqueName: 
\"kubernetes.io/projected/2a1d16f5-4710-43b4-805e-315ed73bb24e-kube-api-access-dpknx\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488587 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488598 4836 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488606 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a1d16f5-4710-43b4-805e-315ed73bb24e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488617 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488626 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82gk8\" (UniqueName: \"kubernetes.io/projected/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-kube-api-access-82gk8\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488637 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfjpm\" (UniqueName: \"kubernetes.io/projected/21c73844-3235-4a12-9f77-901ba8614e11-kube-api-access-nfjpm\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.488648 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc 
kubenswrapper[4836]: I0217 14:29:08.519013 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-config-data" (OuterVolumeSpecName: "config-data") pod "2a1d16f5-4710-43b4-805e-315ed73bb24e" (UID: "2a1d16f5-4710-43b4-805e-315ed73bb24e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.534406 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 14:29:08 crc kubenswrapper[4836]: E0217 14:29:08.534991 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="sg-core" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535022 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="sg-core" Feb 17 14:29:08 crc kubenswrapper[4836]: E0217 14:29:08.535051 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-notification-agent" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535059 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-notification-agent" Feb 17 14:29:08 crc kubenswrapper[4836]: E0217 14:29:08.535081 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-central-agent" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535089 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-central-agent" Feb 17 14:29:08 crc kubenswrapper[4836]: E0217 14:29:08.535099 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-log" Feb 17 14:29:08 crc 
kubenswrapper[4836]: I0217 14:29:08.535107 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-log" Feb 17 14:29:08 crc kubenswrapper[4836]: E0217 14:29:08.535115 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="proxy-httpd" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535122 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="proxy-httpd" Feb 17 14:29:08 crc kubenswrapper[4836]: E0217 14:29:08.535137 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" containerName="cloudkitty-storageinit" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535145 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" containerName="cloudkitty-storageinit" Feb 17 14:29:08 crc kubenswrapper[4836]: E0217 14:29:08.535155 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-api" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535162 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-api" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535408 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-central-agent" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535430 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="ceilometer-notification-agent" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535441 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="proxy-httpd" Feb 17 
14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535460 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-api" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535483 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" containerName="sg-core" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535493 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c73844-3235-4a12-9f77-901ba8614e11" containerName="placement-log" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.535510 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" containerName="cloudkitty-storageinit" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.536563 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.542929 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-scripts" (OuterVolumeSpecName: "scripts") pod "21c73844-3235-4a12-9f77-901ba8614e11" (UID: "21c73844-3235-4a12-9f77-901ba8614e11"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.548182 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fcq7g" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.548592 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.548209 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.553258 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.590480 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drz9c\" (UniqueName: \"kubernetes.io/projected/a8afff37-cd9b-46c0-b407-7c2fb5bada37-kube-api-access-drz9c\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.590587 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.590612 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config-secret\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.590662 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.590872 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.590886 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1d16f5-4710-43b4-805e-315ed73bb24e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.629584 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" (UID: "f38b5f94-bc8b-4e64-abe6-8c39b920cb4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.635251 4836 generic.go:334] "Generic (PLEG): container finished" podID="21c73844-3235-4a12-9f77-901ba8614e11" containerID="f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387" exitCode=0 Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.635570 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55d7557768-wvvpt" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.649470 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.654424 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21c73844-3235-4a12-9f77-901ba8614e11" (UID: "21c73844-3235-4a12-9f77-901ba8614e11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.656461 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-9z4jp" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.683528 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-config-data" (OuterVolumeSpecName: "config-data") pod "21c73844-3235-4a12-9f77-901ba8614e11" (UID: "21c73844-3235-4a12-9f77-901ba8614e11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.692592 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.692644 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config-secret\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.692735 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.692850 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drz9c\" (UniqueName: \"kubernetes.io/projected/a8afff37-cd9b-46c0-b407-7c2fb5bada37-kube-api-access-drz9c\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.692967 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.692979 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.692988 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.694086 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.707606 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.727688 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config-secret\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.746054 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drz9c\" (UniqueName: \"kubernetes.io/projected/a8afff37-cd9b-46c0-b407-7c2fb5bada37-kube-api-access-drz9c\") pod \"openstackclient\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.809640 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "21c73844-3235-4a12-9f77-901ba8614e11" (UID: "21c73844-3235-4a12-9f77-901ba8614e11"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.871662 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "21c73844-3235-4a12-9f77-901ba8614e11" (UID: "21c73844-3235-4a12-9f77-901ba8614e11"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.890945 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.890998 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d7557768-wvvpt" event={"ID":"21c73844-3235-4a12-9f77-901ba8614e11","Type":"ContainerDied","Data":"f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387"} Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.891043 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55d7557768-wvvpt" event={"ID":"21c73844-3235-4a12-9f77-901ba8614e11","Type":"ContainerDied","Data":"bbe789caf6ed33cc607fbf4e010b5eb03468b6cfaedd4af371c447ef9c0fa67b"} Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.891066 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a1d16f5-4710-43b4-805e-315ed73bb24e","Type":"ContainerDied","Data":"725a655ac601adcaa8185b937f6643704390b16c79c731f2de3ba649c346ef2b"} Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.891084 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-storageinit-9z4jp" event={"ID":"f38b5f94-bc8b-4e64-abe6-8c39b920cb4b","Type":"ContainerDied","Data":"bcf545502020f0699c5847e3cc9076fff2937319f46acd5c1c65027d98b9be99"} Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.891104 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf545502020f0699c5847e3cc9076fff2937319f46acd5c1c65027d98b9be99" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.891128 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.893145 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.893197 4836 scope.go:117] "RemoveContainer" containerID="f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.897123 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.897538 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.902035 4836 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.902078 4836 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c73844-3235-4a12-9f77-901ba8614e11-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.903445 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 14:29:08 crc kubenswrapper[4836]: 
I0217 14:29:08.903672 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.906748 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-l28cf" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.930885 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.940232 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.964956 4836 scope.go:117] "RemoveContainer" containerID="d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9" Feb 17 14:29:08 crc kubenswrapper[4836]: I0217 14:29:08.993114 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-cjz8m"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.004962 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-certs\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.005031 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-scripts\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.005065 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-combined-ca-bundle\") 
pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.005083 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.005105 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gqz9\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-kube-api-access-8gqz9\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.005363 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.012909 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.040061 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-cjz8m"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.125224 4836 scope.go:117] "RemoveContainer" containerID="f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387" Feb 17 14:29:09 crc kubenswrapper[4836]: E0217 14:29:09.128860 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387\": container with ID starting with f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387 not found: ID does not exist" containerID="f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.128932 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387"} err="failed to get container status \"f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387\": rpc error: code = NotFound desc = could not find container \"f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387\": container with ID starting with f44d059f8118ff4bd369e70f4ee21eeb5fe30846206d89ff637f2b1a0a5be387 not found: ID does not exist" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.128977 4836 scope.go:117] "RemoveContainer" containerID="d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9" Feb 17 14:29:09 crc kubenswrapper[4836]: E0217 14:29:09.130880 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9\": container with ID starting with 
d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9 not found: ID does not exist" containerID="d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.130924 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9"} err="failed to get container status \"d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9\": rpc error: code = NotFound desc = could not find container \"d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9\": container with ID starting with d7a1e6a91c2fd3547de1abbc6562e149219faa94982d5287b5144a4b6b8eb8b9 not found: ID does not exist" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.130953 4836 scope.go:117] "RemoveContainer" containerID="00d436156aa07858f79630bc19852984525e2688b6a3d2302eeae168425ab6a8" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.133322 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tzl2\" (UniqueName: \"kubernetes.io/projected/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-kube-api-access-9tzl2\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.133442 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.133503 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-certs\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.133531 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.133607 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-scripts\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.133631 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.133986 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.134027 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data\") pod 
\"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.134045 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gqz9\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-kube-api-access-8gqz9\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.134072 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-config\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.134409 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-svc\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.134444 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.150324 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-scripts\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 
14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.153819 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.161516 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.161672 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.173769 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-certs\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.190029 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gqz9\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-kube-api-access-8gqz9\") pod \"cloudkitty-proc-0\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.204044 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.229881 
4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.235931 4836 scope.go:117] "RemoveContainer" containerID="b11cf843196ed96ab329470f8fb90c845e937e84667798d3853568520da77e41" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.238691 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.238766 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-config\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.239016 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-svc\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.239088 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tzl2\" (UniqueName: \"kubernetes.io/projected/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-kube-api-access-9tzl2\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.239210 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.239265 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.239692 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.240153 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.240276 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-svc\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.241318 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-config\") pod 
\"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.242006 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.251760 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.258927 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.261340 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.263358 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tzl2\" (UniqueName: \"kubernetes.io/projected/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-kube-api-access-9tzl2\") pod \"dnsmasq-dns-67bdc55879-cjz8m\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.273212 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.277154 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.284760 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.285621 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 14:29:09 crc kubenswrapper[4836]: E0217 14:29:09.294744 4836 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 14:29:09 crc kubenswrapper[4836]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a8afff37-cd9b-46c0-b407-7c2fb5bada37_0(2ca36da22407de10e49ceddacef100ff3f094b59a7a992ff8a6053e904e4c186): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ca36da22407de10e49ceddacef100ff3f094b59a7a992ff8a6053e904e4c186" Netns:"/var/run/netns/6f100115-555d-4d51-932f-d431d3cb1f50" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=2ca36da22407de10e49ceddacef100ff3f094b59a7a992ff8a6053e904e4c186;K8S_POD_UID=a8afff37-cd9b-46c0-b407-7c2fb5bada37" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a8afff37-cd9b-46c0-b407-7c2fb5bada37]: expected pod UID "a8afff37-cd9b-46c0-b407-7c2fb5bada37" but got "4fe674a8-c32b-412e-8d20-2a6e7e18bb10" from Kube API Feb 17 14:29:09 crc kubenswrapper[4836]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 14:29:09 crc kubenswrapper[4836]: > Feb 17 14:29:09 crc kubenswrapper[4836]: E0217 14:29:09.295425 4836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 14:29:09 crc kubenswrapper[4836]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a8afff37-cd9b-46c0-b407-7c2fb5bada37_0(2ca36da22407de10e49ceddacef100ff3f094b59a7a992ff8a6053e904e4c186): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ca36da22407de10e49ceddacef100ff3f094b59a7a992ff8a6053e904e4c186" Netns:"/var/run/netns/6f100115-555d-4d51-932f-d431d3cb1f50" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=2ca36da22407de10e49ceddacef100ff3f094b59a7a992ff8a6053e904e4c186;K8S_POD_UID=a8afff37-cd9b-46c0-b407-7c2fb5bada37" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a8afff37-cd9b-46c0-b407-7c2fb5bada37]: expected pod UID "a8afff37-cd9b-46c0-b407-7c2fb5bada37" but got "4fe674a8-c32b-412e-8d20-2a6e7e18bb10" from Kube API Feb 17 14:29:09 crc kubenswrapper[4836]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 14:29:09 crc kubenswrapper[4836]: > pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.297691 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-55d7557768-wvvpt"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.320427 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.326971 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.328567 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.341823 4836 scope.go:117] "RemoveContainer" containerID="adc3ef3643d684dbbbf0790a30dd752752d5a28971c3915143c0a6ec314bc365" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.356879 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-55d7557768-wvvpt"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.357260 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8pml\" (UniqueName: \"kubernetes.io/projected/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-kube-api-access-t8pml\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.357439 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358019 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-config-data\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358136 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-openstack-config\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358166 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-log-httpd\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358340 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pcw\" (UniqueName: \"kubernetes.io/projected/d9f887f5-6ce0-4320-94fc-024b1b9ef725-kube-api-access-b7pcw\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358447 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358560 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-openstack-config-secret\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358589 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-scripts\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358675 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.358809 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-run-httpd\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.366453 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.385330 4836 scope.go:117] "RemoveContainer" containerID="6102d176b1010bbf234d415140cba35d28570c5b514c7edd1c4a0962a14c5149" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.426434 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.430154 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.443919 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.444040 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.447019 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462124 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-run-httpd\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462321 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8pml\" (UniqueName: \"kubernetes.io/projected/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-kube-api-access-t8pml\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462394 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462448 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-config-data\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462491 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-openstack-config\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462514 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-log-httpd\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462556 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7pcw\" (UniqueName: \"kubernetes.io/projected/d9f887f5-6ce0-4320-94fc-024b1b9ef725-kube-api-access-b7pcw\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462603 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462647 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-openstack-config-secret\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462680 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-scripts\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 
14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462712 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.462951 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-run-httpd\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.463323 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-log-httpd\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.465047 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-openstack-config\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.468701 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.469110 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.469977 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.470416 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-scripts\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.470457 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-config-data\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.471202 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-openstack-config-secret\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.486867 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7pcw\" (UniqueName: \"kubernetes.io/projected/d9f887f5-6ce0-4320-94fc-024b1b9ef725-kube-api-access-b7pcw\") pod \"ceilometer-0\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.497715 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t8pml\" (UniqueName: \"kubernetes.io/projected/4fe674a8-c32b-412e-8d20-2a6e7e18bb10-kube-api-access-t8pml\") pod \"openstackclient\" (UID: \"4fe674a8-c32b-412e-8d20-2a6e7e18bb10\") " pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.566277 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.566965 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.567086 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-certs\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.567148 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.567235 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.567272 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb10908e-7be1-4ca0-8743-7f9aaae820b7-logs\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.567327 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ftfn\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-kube-api-access-7ftfn\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.615303 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.656807 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.669840 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb10908e-7be1-4ca0-8743-7f9aaae820b7-logs\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.669923 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ftfn\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-kube-api-access-7ftfn\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.670035 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.670067 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.670174 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-certs\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.670252 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.670354 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.679093 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.683562 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.686348 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.688942 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-certs\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc 
kubenswrapper[4836]: I0217 14:29:09.690051 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb10908e-7be1-4ca0-8743-7f9aaae820b7-logs\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.721664 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.766511 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ftfn\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-kube-api-access-7ftfn\") pod \"cloudkitty-api-0\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.777587 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.792500 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.802820 4836 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a8afff37-cd9b-46c0-b407-7c2fb5bada37" podUID="4fe674a8-c32b-412e-8d20-2a6e7e18bb10" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.817709 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.884848 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config-secret\") pod \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.890004 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-combined-ca-bundle\") pod \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.890627 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config\") pod \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.890935 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drz9c\" (UniqueName: \"kubernetes.io/projected/a8afff37-cd9b-46c0-b407-7c2fb5bada37-kube-api-access-drz9c\") pod \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\" (UID: \"a8afff37-cd9b-46c0-b407-7c2fb5bada37\") " Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.891067 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a8afff37-cd9b-46c0-b407-7c2fb5bada37" (UID: "a8afff37-cd9b-46c0-b407-7c2fb5bada37"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.892683 4836 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.893896 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a8afff37-cd9b-46c0-b407-7c2fb5bada37" (UID: "a8afff37-cd9b-46c0-b407-7c2fb5bada37"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.895454 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8afff37-cd9b-46c0-b407-7c2fb5bada37" (UID: "a8afff37-cd9b-46c0-b407-7c2fb5bada37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.910181 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8afff37-cd9b-46c0-b407-7c2fb5bada37-kube-api-access-drz9c" (OuterVolumeSpecName: "kube-api-access-drz9c") pod "a8afff37-cd9b-46c0-b407-7c2fb5bada37" (UID: "a8afff37-cd9b-46c0-b407-7c2fb5bada37"). InnerVolumeSpecName "kube-api-access-drz9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.995116 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drz9c\" (UniqueName: \"kubernetes.io/projected/a8afff37-cd9b-46c0-b407-7c2fb5bada37-kube-api-access-drz9c\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.995169 4836 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:09 crc kubenswrapper[4836]: I0217 14:29:09.995193 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8afff37-cd9b-46c0-b407-7c2fb5bada37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.048005 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.308276 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-cjz8m"] Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.610426 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c73844-3235-4a12-9f77-901ba8614e11" path="/var/lib/kubelet/pods/21c73844-3235-4a12-9f77-901ba8614e11/volumes" Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.611879 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1d16f5-4710-43b4-805e-315ed73bb24e" path="/var/lib/kubelet/pods/2a1d16f5-4710-43b4-805e-315ed73bb24e/volumes" Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.614208 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8afff37-cd9b-46c0-b407-7c2fb5bada37" path="/var/lib/kubelet/pods/a8afff37-cd9b-46c0-b407-7c2fb5bada37/volumes" Feb 17 14:29:10 crc 
kubenswrapper[4836]: I0217 14:29:10.648761 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 14:29:10 crc kubenswrapper[4836]: W0217 14:29:10.680213 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fe674a8_c32b_412e_8d20_2a6e7e18bb10.slice/crio-8d85d78a74909b5d4e4e057b867c51974828a505741fdff580c1df982a520ea3 WatchSource:0}: Error finding container 8d85d78a74909b5d4e4e057b867c51974828a505741fdff580c1df982a520ea3: Status 404 returned error can't find the container with id 8d85d78a74909b5d4e4e057b867c51974828a505741fdff580c1df982a520ea3 Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.754142 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.816829 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4fe674a8-c32b-412e-8d20-2a6e7e18bb10","Type":"ContainerStarted","Data":"8d85d78a74909b5d4e4e057b867c51974828a505741fdff580c1df982a520ea3"} Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.820252 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerStarted","Data":"2b8910649b123c250a9b2ae2a0273df5052a76cf9ac3a4d666b31acdde9dcd6e"} Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.833662 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"39d3cbf1-d107-4004-9eec-698f8f4360b9","Type":"ContainerStarted","Data":"9d18961b4f807b2d078a92a071d329120e24d89e463eadbb04ec662d87231dc8"} Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.842197 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.842742 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" event={"ID":"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d","Type":"ContainerStarted","Data":"3d5f1259a1d6811a1bf928961a17e403bc60cdb65dc5d67063f562d7b7e44223"} Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.848861 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:10 crc kubenswrapper[4836]: I0217 14:29:10.899569 4836 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a8afff37-cd9b-46c0-b407-7c2fb5bada37" podUID="4fe674a8-c32b-412e-8d20-2a6e7e18bb10" Feb 17 14:29:11 crc kubenswrapper[4836]: I0217 14:29:11.874027 4836 generic.go:334] "Generic (PLEG): container finished" podID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerID="1fc9116efed5aa1cde1e1851a8feece763300523cbdc4d6253a5c08f4f4f9f36" exitCode=0 Feb 17 14:29:11 crc kubenswrapper[4836]: I0217 14:29:11.874206 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" event={"ID":"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d","Type":"ContainerDied","Data":"1fc9116efed5aa1cde1e1851a8feece763300523cbdc4d6253a5c08f4f4f9f36"} Feb 17 14:29:11 crc kubenswrapper[4836]: I0217 14:29:11.900160 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb10908e-7be1-4ca0-8743-7f9aaae820b7","Type":"ContainerStarted","Data":"82bc2e3a70ecec92d3994393a44d6752f39e87009a67fa6ce836cf5fea4e8d25"} Feb 17 14:29:11 crc kubenswrapper[4836]: I0217 14:29:11.900226 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb10908e-7be1-4ca0-8743-7f9aaae820b7","Type":"ContainerStarted","Data":"fef50b67c5370254f40d86b0f2cec5c6baf88547e44514223d3def4388ffb9b9"} Feb 17 14:29:11 crc 
kubenswrapper[4836]: I0217 14:29:11.900239 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb10908e-7be1-4ca0-8743-7f9aaae820b7","Type":"ContainerStarted","Data":"d9850d73e5e57596ca879cc8b2c2875e4f65ab2930249ef121a2ca2c42da234e"} Feb 17 14:29:11 crc kubenswrapper[4836]: I0217 14:29:11.900323 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 17 14:29:11 crc kubenswrapper[4836]: I0217 14:29:11.985130 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.98509147 podStartE2EDuration="2.98509147s" podCreationTimestamp="2026-02-17 14:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:11.937148184 +0000 UTC m=+1378.280076463" watchObservedRunningTime="2026-02-17 14:29:11.98509147 +0000 UTC m=+1378.328019749" Feb 17 14:29:12 crc kubenswrapper[4836]: I0217 14:29:12.723435 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:13 crc kubenswrapper[4836]: I0217 14:29:13.939085 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"39d3cbf1-d107-4004-9eec-698f8f4360b9","Type":"ContainerStarted","Data":"9ee60ada822c522c9249d0e3c31f511d939804abdb610bce124e951b7000a09d"} Feb 17 14:29:13 crc kubenswrapper[4836]: I0217 14:29:13.948010 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" event={"ID":"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d","Type":"ContainerStarted","Data":"407f5678203e5e174c01300835b55b61252a1ab248014426970911ab531d756b"} Feb 17 14:29:13 crc kubenswrapper[4836]: I0217 14:29:13.948165 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:13 crc kubenswrapper[4836]: I0217 
14:29:13.950242 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api-log" containerID="cri-o://fef50b67c5370254f40d86b0f2cec5c6baf88547e44514223d3def4388ffb9b9" gracePeriod=30 Feb 17 14:29:13 crc kubenswrapper[4836]: I0217 14:29:13.950542 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerStarted","Data":"fb5cf9ee8d101cc6fae1fb5f79f35c27c28cf9fc0fa0631bc345f006efba64c9"} Feb 17 14:29:13 crc kubenswrapper[4836]: I0217 14:29:13.950610 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api" containerID="cri-o://82bc2e3a70ecec92d3994393a44d6752f39e87009a67fa6ce836cf5fea4e8d25" gracePeriod=30 Feb 17 14:29:13 crc kubenswrapper[4836]: I0217 14:29:13.971654 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.838848005 podStartE2EDuration="5.971626155s" podCreationTimestamp="2026-02-17 14:29:08 +0000 UTC" firstStartedPulling="2026-02-17 14:29:10.299594533 +0000 UTC m=+1376.642522802" lastFinishedPulling="2026-02-17 14:29:13.432372683 +0000 UTC m=+1379.775300952" observedRunningTime="2026-02-17 14:29:13.960916066 +0000 UTC m=+1380.303844335" watchObservedRunningTime="2026-02-17 14:29:13.971626155 +0000 UTC m=+1380.314554424" Feb 17 14:29:14 crc kubenswrapper[4836]: I0217 14:29:14.016989 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:14 crc kubenswrapper[4836]: I0217 14:29:14.029770 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" podStartSLOduration=6.029737255 podStartE2EDuration="6.029737255s" 
podCreationTimestamp="2026-02-17 14:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:13.988192693 +0000 UTC m=+1380.331120962" watchObservedRunningTime="2026-02-17 14:29:14.029737255 +0000 UTC m=+1380.372665524" Feb 17 14:29:14 crc kubenswrapper[4836]: I0217 14:29:14.231956 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.002679 4836 generic.go:334] "Generic (PLEG): container finished" podID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerID="82bc2e3a70ecec92d3994393a44d6752f39e87009a67fa6ce836cf5fea4e8d25" exitCode=0 Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.003268 4836 generic.go:334] "Generic (PLEG): container finished" podID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerID="fef50b67c5370254f40d86b0f2cec5c6baf88547e44514223d3def4388ffb9b9" exitCode=143 Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.002797 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb10908e-7be1-4ca0-8743-7f9aaae820b7","Type":"ContainerDied","Data":"82bc2e3a70ecec92d3994393a44d6752f39e87009a67fa6ce836cf5fea4e8d25"} Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.003434 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb10908e-7be1-4ca0-8743-7f9aaae820b7","Type":"ContainerDied","Data":"fef50b67c5370254f40d86b0f2cec5c6baf88547e44514223d3def4388ffb9b9"} Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.024427 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerStarted","Data":"3af1c0902b859232ebd0fe7d2dd1bbcfb19e56b6f4b1d314aace0818279210a3"} Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.205315 4836 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.298517 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-scripts\") pod \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.298995 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data\") pod \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.299059 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb10908e-7be1-4ca0-8743-7f9aaae820b7-logs\") pod \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.299129 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-certs\") pod \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.299156 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-combined-ca-bundle\") pod \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.299216 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ftfn\" (UniqueName: 
\"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-kube-api-access-7ftfn\") pod \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.299280 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data-custom\") pod \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\" (UID: \"bb10908e-7be1-4ca0-8743-7f9aaae820b7\") " Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.300388 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb10908e-7be1-4ca0-8743-7f9aaae820b7-logs" (OuterVolumeSpecName: "logs") pod "bb10908e-7be1-4ca0-8743-7f9aaae820b7" (UID: "bb10908e-7be1-4ca0-8743-7f9aaae820b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.310556 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb10908e-7be1-4ca0-8743-7f9aaae820b7-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.600408 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fc4994bf7-cqhhj" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.608332 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-certs" (OuterVolumeSpecName: "certs") pod "bb10908e-7be1-4ca0-8743-7f9aaae820b7" (UID: "bb10908e-7be1-4ca0-8743-7f9aaae820b7"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.608474 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb10908e-7be1-4ca0-8743-7f9aaae820b7" (UID: "bb10908e-7be1-4ca0-8743-7f9aaae820b7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.608732 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-scripts" (OuterVolumeSpecName: "scripts") pod "bb10908e-7be1-4ca0-8743-7f9aaae820b7" (UID: "bb10908e-7be1-4ca0-8743-7f9aaae820b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.609333 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-kube-api-access-7ftfn" (OuterVolumeSpecName: "kube-api-access-7ftfn") pod "bb10908e-7be1-4ca0-8743-7f9aaae820b7" (UID: "bb10908e-7be1-4ca0-8743-7f9aaae820b7"). InnerVolumeSpecName "kube-api-access-7ftfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.614755 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb10908e-7be1-4ca0-8743-7f9aaae820b7" (UID: "bb10908e-7be1-4ca0-8743-7f9aaae820b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.616970 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data" (OuterVolumeSpecName: "config-data") pod "bb10908e-7be1-4ca0-8743-7f9aaae820b7" (UID: "bb10908e-7be1-4ca0-8743-7f9aaae820b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.653424 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.653809 4836 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.653911 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.653995 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ftfn\" (UniqueName: \"kubernetes.io/projected/bb10908e-7be1-4ca0-8743-7f9aaae820b7-kube-api-access-7ftfn\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.654076 4836 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.654158 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bb10908e-7be1-4ca0-8743-7f9aaae820b7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.711060 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56bdc657f6-lhdd4"] Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.711449 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56bdc657f6-lhdd4" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-api" containerID="cri-o://b7a5e210ee7a505ae087f3c56329942b71db962383e4ae1693812dd8340169c8" gracePeriod=30 Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.711540 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56bdc657f6-lhdd4" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-httpd" containerID="cri-o://d2098b2a7c4dcbee4fa27ea9bfa1c19e32c5f83e96aa663b877abb8284852c74" gracePeriod=30 Feb 17 14:29:15 crc kubenswrapper[4836]: I0217 14:29:15.993933 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.089191 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerStarted","Data":"b3a6771b8e2f194ac7bcb94abab4f0e58b19807bc132dac6154a588305752da0"} Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.091631 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="39d3cbf1-d107-4004-9eec-698f8f4360b9" containerName="cloudkitty-proc" containerID="cri-o://9ee60ada822c522c9249d0e3c31f511d939804abdb610bce124e951b7000a09d" gracePeriod=30 Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.091973 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.106249 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb10908e-7be1-4ca0-8743-7f9aaae820b7","Type":"ContainerDied","Data":"d9850d73e5e57596ca879cc8b2c2875e4f65ab2930249ef121a2ca2c42da234e"} Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.106340 4836 scope.go:117] "RemoveContainer" containerID="82bc2e3a70ecec92d3994393a44d6752f39e87009a67fa6ce836cf5fea4e8d25" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.194367 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.238560 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.273104 4836 scope.go:117] "RemoveContainer" containerID="fef50b67c5370254f40d86b0f2cec5c6baf88547e44514223d3def4388ffb9b9" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.273261 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:16 crc kubenswrapper[4836]: E0217 14:29:16.273790 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api-log" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.273808 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api-log" Feb 17 14:29:16 crc kubenswrapper[4836]: E0217 14:29:16.273860 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.273870 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api" Feb 17 14:29:16 crc 
kubenswrapper[4836]: I0217 14:29:16.274222 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.274255 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" containerName="cloudkitty-api-log" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.292020 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.299026 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.299375 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.316775 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.356032 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.388896 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.409981 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 
crc kubenswrapper[4836]: I0217 14:29:16.410076 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-scripts\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.410170 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-logs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.410396 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.410491 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.410542 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-config-data\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.410678 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.410717 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmlr\" (UniqueName: \"kubernetes.io/projected/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-kube-api-access-wxmlr\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.523626 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.523720 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-scripts\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.523792 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-logs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.523914 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.523992 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.524029 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-config-data\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.524102 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmlr\" (UniqueName: \"kubernetes.io/projected/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-kube-api-access-wxmlr\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.524143 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.524238 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " 
pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.536229 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-logs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.590874 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.592185 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-config-data\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.592906 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.593378 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-scripts\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.600044 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.615041 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmlr\" (UniqueName: \"kubernetes.io/projected/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-kube-api-access-wxmlr\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.631323 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.648822 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb10908e-7be1-4ca0-8743-7f9aaae820b7" path="/var/lib/kubelet/pods/bb10908e-7be1-4ca0-8743-7f9aaae820b7/volumes" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.687858 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49\") " pod="openstack/cloudkitty-api-0" Feb 17 14:29:16 crc kubenswrapper[4836]: I0217 14:29:16.714597 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 14:29:17 crc kubenswrapper[4836]: I0217 14:29:17.151586 4836 generic.go:334] "Generic (PLEG): container finished" podID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerID="d2098b2a7c4dcbee4fa27ea9bfa1c19e32c5f83e96aa663b877abb8284852c74" exitCode=0 Feb 17 14:29:17 crc kubenswrapper[4836]: I0217 14:29:17.151772 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bdc657f6-lhdd4" event={"ID":"10f74a60-5438-45cd-a8e1-74ccc1c3b16a","Type":"ContainerDied","Data":"d2098b2a7c4dcbee4fa27ea9bfa1c19e32c5f83e96aa663b877abb8284852c74"} Feb 17 14:29:17 crc kubenswrapper[4836]: I0217 14:29:17.879839 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 14:29:18 crc kubenswrapper[4836]: I0217 14:29:18.199678 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49","Type":"ContainerStarted","Data":"171bb4aa1094f8a0584a040c722a33e6019383c4680c5fcc2e47aa3bfa0b5335"} Feb 17 14:29:18 crc kubenswrapper[4836]: I0217 14:29:18.924270 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="8722776f-950d-46d6-8929-164cc70747af" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.185:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.301581 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49","Type":"ContainerStarted","Data":"b182c0e43bd062d36f7a248e3b4dc068a23d4ee84664564a49eac1b47e4ae8bd"} Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.301642 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49","Type":"ContainerStarted","Data":"03b77758d4164ee76b6cd9772d66859d23a2ed9d8f68d0c6fcf072d038fdaabe"} Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.302476 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.333353 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerStarted","Data":"c8b99a1c879fa5cc6c1f47dcb0736390ea6d8c4736e3f5b0bc65697ec35d7092"} Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.333854 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.337148 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.33711234 podStartE2EDuration="3.33711234s" podCreationTimestamp="2026-02-17 14:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:19.333163233 +0000 UTC m=+1385.676091502" watchObservedRunningTime="2026-02-17 14:29:19.33711234 +0000 UTC m=+1385.680040609" Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.368274 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.651975759 podStartE2EDuration="11.368248131s" podCreationTimestamp="2026-02-17 14:29:08 +0000 UTC" firstStartedPulling="2026-02-17 14:29:10.785218917 +0000 UTC m=+1377.128147186" lastFinishedPulling="2026-02-17 14:29:17.501491299 +0000 UTC m=+1383.844419558" observedRunningTime="2026-02-17 14:29:19.360737817 +0000 UTC m=+1385.703666096" watchObservedRunningTime="2026-02-17 14:29:19.368248131 +0000 UTC m=+1385.711176400" Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 
14:29:19.428514 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.529648 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nvkvs"] Feb 17 14:29:19 crc kubenswrapper[4836]: I0217 14:29:19.530131 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" podUID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerName="dnsmasq-dns" containerID="cri-o://116ce92f31628ecf8d5384bc487f6288540b5a8b08da5572838c3c49083bb344" gracePeriod=10 Feb 17 14:29:20 crc kubenswrapper[4836]: I0217 14:29:20.455900 4836 generic.go:334] "Generic (PLEG): container finished" podID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerID="b7a5e210ee7a505ae087f3c56329942b71db962383e4ae1693812dd8340169c8" exitCode=0 Feb 17 14:29:20 crc kubenswrapper[4836]: I0217 14:29:20.456025 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bdc657f6-lhdd4" event={"ID":"10f74a60-5438-45cd-a8e1-74ccc1c3b16a","Type":"ContainerDied","Data":"b7a5e210ee7a505ae087f3c56329942b71db962383e4ae1693812dd8340169c8"} Feb 17 14:29:20 crc kubenswrapper[4836]: I0217 14:29:20.469158 4836 generic.go:334] "Generic (PLEG): container finished" podID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerID="116ce92f31628ecf8d5384bc487f6288540b5a8b08da5572838c3c49083bb344" exitCode=0 Feb 17 14:29:20 crc kubenswrapper[4836]: I0217 14:29:20.470508 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" event={"ID":"79b71acb-6b55-4f99-8b13-0c5aea065cbb","Type":"ContainerDied","Data":"116ce92f31628ecf8d5384bc487f6288540b5a8b08da5572838c3c49083bb344"} Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.048243 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.137533 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-nb\") pod \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.137616 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-svc\") pod \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.137710 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-config\") pod \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.137841 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvggg\" (UniqueName: \"kubernetes.io/projected/79b71acb-6b55-4f99-8b13-0c5aea065cbb-kube-api-access-zvggg\") pod \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.137930 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-sb\") pod \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.137976 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-swift-storage-0\") pod \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\" (UID: \"79b71acb-6b55-4f99-8b13-0c5aea065cbb\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.152024 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b71acb-6b55-4f99-8b13-0c5aea065cbb-kube-api-access-zvggg" (OuterVolumeSpecName: "kube-api-access-zvggg") pod "79b71acb-6b55-4f99-8b13-0c5aea065cbb" (UID: "79b71acb-6b55-4f99-8b13-0c5aea065cbb"). InnerVolumeSpecName "kube-api-access-zvggg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.175545 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="8722776f-950d-46d6-8929-164cc70747af" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.185:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.431848 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79b71acb-6b55-4f99-8b13-0c5aea065cbb" (UID: "79b71acb-6b55-4f99-8b13-0c5aea065cbb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.432793 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79b71acb-6b55-4f99-8b13-0c5aea065cbb" (UID: "79b71acb-6b55-4f99-8b13-0c5aea065cbb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.438535 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79b71acb-6b55-4f99-8b13-0c5aea065cbb" (UID: "79b71acb-6b55-4f99-8b13-0c5aea065cbb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.440740 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.440782 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.440793 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.440802 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvggg\" (UniqueName: \"kubernetes.io/projected/79b71acb-6b55-4f99-8b13-0c5aea065cbb-kube-api-access-zvggg\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.453501 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79b71acb-6b55-4f99-8b13-0c5aea065cbb" (UID: "79b71acb-6b55-4f99-8b13-0c5aea065cbb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.480032 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-config" (OuterVolumeSpecName: "config") pod "79b71acb-6b55-4f99-8b13-0c5aea065cbb" (UID: "79b71acb-6b55-4f99-8b13-0c5aea065cbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.503706 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bdc657f6-lhdd4" event={"ID":"10f74a60-5438-45cd-a8e1-74ccc1c3b16a","Type":"ContainerDied","Data":"3131621aad6bddf8f2539d514b9526e7c3c20a9b86076d983784e09cb9285473"} Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.503765 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3131621aad6bddf8f2539d514b9526e7c3c20a9b86076d983784e09cb9285473" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.506078 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.506686 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" event={"ID":"79b71acb-6b55-4f99-8b13-0c5aea065cbb","Type":"ContainerDied","Data":"1fe3b4e682953cc1e2a6a78ba19a4ca238a5effc3b1823c6d9c0ce3876e226a4"} Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.506738 4836 scope.go:117] "RemoveContainer" containerID="116ce92f31628ecf8d5384bc487f6288540b5a8b08da5572838c3c49083bb344" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.506869 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nvkvs" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.545121 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-combined-ca-bundle\") pod \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.545256 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-ovndb-tls-certs\") pod \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.553550 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-config\") pod \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.553610 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-httpd-config\") pod \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.553969 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whzb4\" (UniqueName: \"kubernetes.io/projected/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-kube-api-access-whzb4\") pod \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\" (UID: \"10f74a60-5438-45cd-a8e1-74ccc1c3b16a\") " Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.556186 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.556205 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79b71acb-6b55-4f99-8b13-0c5aea065cbb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.562441 4836 scope.go:117] "RemoveContainer" containerID="277bd33eae834b988e7c295c653ee707631d0efdc5453cfacb6a97be01ceb016" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.572471 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "10f74a60-5438-45cd-a8e1-74ccc1c3b16a" (UID: "10f74a60-5438-45cd-a8e1-74ccc1c3b16a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.579720 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-kube-api-access-whzb4" (OuterVolumeSpecName: "kube-api-access-whzb4") pod "10f74a60-5438-45cd-a8e1-74ccc1c3b16a" (UID: "10f74a60-5438-45cd-a8e1-74ccc1c3b16a"). InnerVolumeSpecName "kube-api-access-whzb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.617367 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nvkvs"] Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.637629 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nvkvs"] Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.663333 4836 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.663392 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whzb4\" (UniqueName: \"kubernetes.io/projected/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-kube-api-access-whzb4\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.672037 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10f74a60-5438-45cd-a8e1-74ccc1c3b16a" (UID: "10f74a60-5438-45cd-a8e1-74ccc1c3b16a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.688518 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "10f74a60-5438-45cd-a8e1-74ccc1c3b16a" (UID: "10f74a60-5438-45cd-a8e1-74ccc1c3b16a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.696549 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-config" (OuterVolumeSpecName: "config") pod "10f74a60-5438-45cd-a8e1-74ccc1c3b16a" (UID: "10f74a60-5438-45cd-a8e1-74ccc1c3b16a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.766397 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.766472 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:21 crc kubenswrapper[4836]: I0217 14:29:21.766486 4836 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f74a60-5438-45cd-a8e1-74ccc1c3b16a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.528859 4836 generic.go:334] "Generic (PLEG): container finished" podID="39d3cbf1-d107-4004-9eec-698f8f4360b9" containerID="9ee60ada822c522c9249d0e3c31f511d939804abdb610bce124e951b7000a09d" exitCode=0 Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.528976 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"39d3cbf1-d107-4004-9eec-698f8f4360b9","Type":"ContainerDied","Data":"9ee60ada822c522c9249d0e3c31f511d939804abdb610bce124e951b7000a09d"} Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.529432 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" 
event={"ID":"39d3cbf1-d107-4004-9eec-698f8f4360b9","Type":"ContainerDied","Data":"9d18961b4f807b2d078a92a071d329120e24d89e463eadbb04ec662d87231dc8"} Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.529454 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d18961b4f807b2d078a92a071d329120e24d89e463eadbb04ec662d87231dc8" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.537959 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56bdc657f6-lhdd4" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.560601 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.835969 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data-custom\") pod \"39d3cbf1-d107-4004-9eec-698f8f4360b9\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.836090 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-scripts\") pod \"39d3cbf1-d107-4004-9eec-698f8f4360b9\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.836172 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data\") pod \"39d3cbf1-d107-4004-9eec-698f8f4360b9\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.836373 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-combined-ca-bundle\") pod \"39d3cbf1-d107-4004-9eec-698f8f4360b9\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.836407 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-certs\") pod \"39d3cbf1-d107-4004-9eec-698f8f4360b9\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.836444 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gqz9\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-kube-api-access-8gqz9\") pod \"39d3cbf1-d107-4004-9eec-698f8f4360b9\" (UID: \"39d3cbf1-d107-4004-9eec-698f8f4360b9\") " Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.845601 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-scripts" (OuterVolumeSpecName: "scripts") pod "39d3cbf1-d107-4004-9eec-698f8f4360b9" (UID: "39d3cbf1-d107-4004-9eec-698f8f4360b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.846257 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-kube-api-access-8gqz9" (OuterVolumeSpecName: "kube-api-access-8gqz9") pod "39d3cbf1-d107-4004-9eec-698f8f4360b9" (UID: "39d3cbf1-d107-4004-9eec-698f8f4360b9"). InnerVolumeSpecName "kube-api-access-8gqz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.849019 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" path="/var/lib/kubelet/pods/79b71acb-6b55-4f99-8b13-0c5aea065cbb/volumes" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.857993 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39d3cbf1-d107-4004-9eec-698f8f4360b9" (UID: "39d3cbf1-d107-4004-9eec-698f8f4360b9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.878727 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-certs" (OuterVolumeSpecName: "certs") pod "39d3cbf1-d107-4004-9eec-698f8f4360b9" (UID: "39d3cbf1-d107-4004-9eec-698f8f4360b9"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.897946 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56bdc657f6-lhdd4"] Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.913582 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data" (OuterVolumeSpecName: "config-data") pod "39d3cbf1-d107-4004-9eec-698f8f4360b9" (UID: "39d3cbf1-d107-4004-9eec-698f8f4360b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.940425 4836 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.940469 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gqz9\" (UniqueName: \"kubernetes.io/projected/39d3cbf1-d107-4004-9eec-698f8f4360b9-kube-api-access-8gqz9\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.940485 4836 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.940496 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.940507 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:22 crc kubenswrapper[4836]: I0217 14:29:22.941633 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56bdc657f6-lhdd4"] Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.044573 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39d3cbf1-d107-4004-9eec-698f8f4360b9" (UID: "39d3cbf1-d107-4004-9eec-698f8f4360b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.145722 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d3cbf1-d107-4004-9eec-698f8f4360b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.547714 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.599684 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.618993 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.655258 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:23 crc kubenswrapper[4836]: E0217 14:29:23.659584 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-api" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.659732 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-api" Feb 17 14:29:23 crc kubenswrapper[4836]: E0217 14:29:23.659872 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d3cbf1-d107-4004-9eec-698f8f4360b9" containerName="cloudkitty-proc" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.659885 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d3cbf1-d107-4004-9eec-698f8f4360b9" containerName="cloudkitty-proc" Feb 17 14:29:23 crc kubenswrapper[4836]: E0217 14:29:23.659909 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerName="init" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 
14:29:23.659916 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerName="init" Feb 17 14:29:23 crc kubenswrapper[4836]: E0217 14:29:23.660034 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerName="dnsmasq-dns" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.660069 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerName="dnsmasq-dns" Feb 17 14:29:23 crc kubenswrapper[4836]: E0217 14:29:23.660274 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-httpd" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.660284 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-httpd" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.664784 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-api" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.664849 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" containerName="neutron-httpd" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.664907 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b71acb-6b55-4f99-8b13-0c5aea065cbb" containerName="dnsmasq-dns" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.664936 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d3cbf1-d107-4004-9eec-698f8f4360b9" containerName="cloudkitty-proc" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.704544 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.708282 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 17 14:29:23 crc kubenswrapper[4836]: I0217 14:29:23.715328 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.066765 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-scripts\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.066856 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zk4\" (UniqueName: \"kubernetes.io/projected/79c00bb2-9487-433a-be90-07b6d885e4d0-kube-api-access-t4zk4\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.066902 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/79c00bb2-9487-433a-be90-07b6d885e4d0-certs\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.066930 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.066977 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.067245 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-config-data\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.168841 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-scripts\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.168945 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zk4\" (UniqueName: \"kubernetes.io/projected/79c00bb2-9487-433a-be90-07b6d885e4d0-kube-api-access-t4zk4\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.168984 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/79c00bb2-9487-433a-be90-07b6d885e4d0-certs\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.169009 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.169044 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.169084 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-config-data\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.173690 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-config-data\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.174608 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.177103 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" 
Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.178274 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/79c00bb2-9487-433a-be90-07b6d885e4d0-certs\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.181851 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79c00bb2-9487-433a-be90-07b6d885e4d0-scripts\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.195283 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zk4\" (UniqueName: \"kubernetes.io/projected/79c00bb2-9487-433a-be90-07b6d885e4d0-kube-api-access-t4zk4\") pod \"cloudkitty-proc-0\" (UID: \"79c00bb2-9487-433a-be90-07b6d885e4d0\") " pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.352675 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.605614 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f74a60-5438-45cd-a8e1-74ccc1c3b16a" path="/var/lib/kubelet/pods/10f74a60-5438-45cd-a8e1-74ccc1c3b16a/volumes" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.607109 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d3cbf1-d107-4004-9eec-698f8f4360b9" path="/var/lib/kubelet/pods/39d3cbf1-d107-4004-9eec-698f8f4360b9/volumes" Feb 17 14:29:24 crc kubenswrapper[4836]: I0217 14:29:24.955152 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 14:29:24 crc kubenswrapper[4836]: W0217 14:29:24.972655 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79c00bb2_9487_433a_be90_07b6d885e4d0.slice/crio-037da9c804a575629b2d00187037dbbd47c8894e5055ae16baa77a9785db9029 WatchSource:0}: Error finding container 037da9c804a575629b2d00187037dbbd47c8894e5055ae16baa77a9785db9029: Status 404 returned error can't find the container with id 037da9c804a575629b2d00187037dbbd47c8894e5055ae16baa77a9785db9029 Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.617666 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"79c00bb2-9487-433a-be90-07b6d885e4d0","Type":"ContainerStarted","Data":"4822625b39e69e126abd0c471117fa05d8239917395a393efe328d6f62d1df58"} Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.619498 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"79c00bb2-9487-433a-be90-07b6d885e4d0","Type":"ContainerStarted","Data":"037da9c804a575629b2d00187037dbbd47c8894e5055ae16baa77a9785db9029"} Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.650676 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.650655536 podStartE2EDuration="2.650655536s" podCreationTimestamp="2026-02-17 14:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:25.646779191 +0000 UTC m=+1391.989707470" watchObservedRunningTime="2026-02-17 14:29:25.650655536 +0000 UTC m=+1391.993583805" Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.981614 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5d87f46c5f-vfn9f"] Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.984457 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.988231 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.988444 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 14:29:25 crc kubenswrapper[4836]: I0217 14:29:25.988756 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.017705 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d87f46c5f-vfn9f"] Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.182813 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-config-data\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.182998 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a17ffb1e-09d2-4524-8c33-e50e15b9031d-run-httpd\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.183047 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqlh\" (UniqueName: \"kubernetes.io/projected/a17ffb1e-09d2-4524-8c33-e50e15b9031d-kube-api-access-knqlh\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.183147 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-public-tls-certs\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.183182 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a17ffb1e-09d2-4524-8c33-e50e15b9031d-log-httpd\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.183219 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a17ffb1e-09d2-4524-8c33-e50e15b9031d-etc-swift\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.183253 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-internal-tls-certs\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.183334 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-combined-ca-bundle\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.285659 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a17ffb1e-09d2-4524-8c33-e50e15b9031d-run-httpd\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.286001 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knqlh\" (UniqueName: \"kubernetes.io/projected/a17ffb1e-09d2-4524-8c33-e50e15b9031d-kube-api-access-knqlh\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.286179 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-public-tls-certs\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.286480 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a17ffb1e-09d2-4524-8c33-e50e15b9031d-log-httpd\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.286675 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a17ffb1e-09d2-4524-8c33-e50e15b9031d-etc-swift\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.287231 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-internal-tls-certs\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.287383 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-combined-ca-bundle\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.287518 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-config-data\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.286904 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a17ffb1e-09d2-4524-8c33-e50e15b9031d-log-httpd\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.299808 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a17ffb1e-09d2-4524-8c33-e50e15b9031d-run-httpd\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.302790 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-internal-tls-certs\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.302948 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-config-data\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.303088 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-combined-ca-bundle\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.307073 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17ffb1e-09d2-4524-8c33-e50e15b9031d-public-tls-certs\") pod 
\"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.307378 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a17ffb1e-09d2-4524-8c33-e50e15b9031d-etc-swift\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.311969 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqlh\" (UniqueName: \"kubernetes.io/projected/a17ffb1e-09d2-4524-8c33-e50e15b9031d-kube-api-access-knqlh\") pod \"swift-proxy-5d87f46c5f-vfn9f\" (UID: \"a17ffb1e-09d2-4524-8c33-e50e15b9031d\") " pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:26 crc kubenswrapper[4836]: I0217 14:29:26.610755 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.287653 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.294771 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-central-agent" containerID="cri-o://fb5cf9ee8d101cc6fae1fb5f79f35c27c28cf9fc0fa0631bc345f006efba64c9" gracePeriod=30 Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.295002 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="proxy-httpd" containerID="cri-o://c8b99a1c879fa5cc6c1f47dcb0736390ea6d8c4736e3f5b0bc65697ec35d7092" gracePeriod=30 Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.295056 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="sg-core" containerID="cri-o://b3a6771b8e2f194ac7bcb94abab4f0e58b19807bc132dac6154a588305752da0" gracePeriod=30 Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.295106 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-notification-agent" containerID="cri-o://3af1c0902b859232ebd0fe7d2dd1bbcfb19e56b6f4b1d314aace0818279210a3" gracePeriod=30 Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.706566 4836 generic.go:334] "Generic (PLEG): container finished" podID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerID="c8b99a1c879fa5cc6c1f47dcb0736390ea6d8c4736e3f5b0bc65697ec35d7092" exitCode=0 Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.706607 4836 generic.go:334] "Generic (PLEG): container finished" 
podID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerID="b3a6771b8e2f194ac7bcb94abab4f0e58b19807bc132dac6154a588305752da0" exitCode=2 Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.706633 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerDied","Data":"c8b99a1c879fa5cc6c1f47dcb0736390ea6d8c4736e3f5b0bc65697ec35d7092"} Feb 17 14:29:27 crc kubenswrapper[4836]: I0217 14:29:27.706666 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerDied","Data":"b3a6771b8e2f194ac7bcb94abab4f0e58b19807bc132dac6154a588305752da0"} Feb 17 14:29:28 crc kubenswrapper[4836]: I0217 14:29:28.738153 4836 generic.go:334] "Generic (PLEG): container finished" podID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerID="3af1c0902b859232ebd0fe7d2dd1bbcfb19e56b6f4b1d314aace0818279210a3" exitCode=0 Feb 17 14:29:28 crc kubenswrapper[4836]: I0217 14:29:28.738209 4836 generic.go:334] "Generic (PLEG): container finished" podID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerID="fb5cf9ee8d101cc6fae1fb5f79f35c27c28cf9fc0fa0631bc345f006efba64c9" exitCode=0 Feb 17 14:29:28 crc kubenswrapper[4836]: I0217 14:29:28.738242 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerDied","Data":"3af1c0902b859232ebd0fe7d2dd1bbcfb19e56b6f4b1d314aace0818279210a3"} Feb 17 14:29:28 crc kubenswrapper[4836]: I0217 14:29:28.738345 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerDied","Data":"fb5cf9ee8d101cc6fae1fb5f79f35c27c28cf9fc0fa0631bc345f006efba64c9"} Feb 17 14:29:29 crc kubenswrapper[4836]: I0217 14:29:29.765007 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:29:29 crc kubenswrapper[4836]: I0217 14:29:29.765446 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:29:35 crc kubenswrapper[4836]: E0217 14:29:35.676148 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 17 14:29:35 crc kubenswrapper[4836]: E0217 14:29:35.677548 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65h677h7bh89h55dh65hbh5fh5f4h5b7h66fh5bdh6ch95h5d6hc4h598h5fch54bh5h5f6h5d9hdh79h678h8dh95h55h5c5hdhd5h5b9q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secre
t,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8pml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(4fe674a8-c32b-412e-8d20-2a6e7e18bb10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:29:35 crc kubenswrapper[4836]: E0217 14:29:35.678836 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="4fe674a8-c32b-412e-8d20-2a6e7e18bb10" Feb 17 14:29:36 crc kubenswrapper[4836]: E0217 14:29:36.020456 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="4fe674a8-c32b-412e-8d20-2a6e7e18bb10" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.237496 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.395740 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d87f46c5f-vfn9f"] Feb 17 14:29:36 crc kubenswrapper[4836]: W0217 14:29:36.398988 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda17ffb1e_09d2_4524_8c33_e50e15b9031d.slice/crio-24bb742ba2025f263a0fd97d88da8e8a321ca187d115b21d9de3b0fa231a07a0 WatchSource:0}: Error finding container 24bb742ba2025f263a0fd97d88da8e8a321ca187d115b21d9de3b0fa231a07a0: Status 404 returned error can't find the container with id 24bb742ba2025f263a0fd97d88da8e8a321ca187d115b21d9de3b0fa231a07a0 Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.440004 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-combined-ca-bundle\") pod \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.440167 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7pcw\" (UniqueName: \"kubernetes.io/projected/d9f887f5-6ce0-4320-94fc-024b1b9ef725-kube-api-access-b7pcw\") pod \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.440317 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-log-httpd\") pod \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.441272 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d9f887f5-6ce0-4320-94fc-024b1b9ef725" (UID: "d9f887f5-6ce0-4320-94fc-024b1b9ef725"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.441522 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-sg-core-conf-yaml\") pod \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.442052 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-config-data\") pod \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.442102 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-scripts\") pod \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\" (UID: \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.442243 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-run-httpd\") pod \"d9f887f5-6ce0-4320-94fc-024b1b9ef725\" (UID: 
\"d9f887f5-6ce0-4320-94fc-024b1b9ef725\") " Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.443158 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d9f887f5-6ce0-4320-94fc-024b1b9ef725" (UID: "d9f887f5-6ce0-4320-94fc-024b1b9ef725"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.443193 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.443441 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f887f5-6ce0-4320-94fc-024b1b9ef725-kube-api-access-b7pcw" (OuterVolumeSpecName: "kube-api-access-b7pcw") pod "d9f887f5-6ce0-4320-94fc-024b1b9ef725" (UID: "d9f887f5-6ce0-4320-94fc-024b1b9ef725"). InnerVolumeSpecName "kube-api-access-b7pcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.447997 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-scripts" (OuterVolumeSpecName: "scripts") pod "d9f887f5-6ce0-4320-94fc-024b1b9ef725" (UID: "d9f887f5-6ce0-4320-94fc-024b1b9ef725"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.489967 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d9f887f5-6ce0-4320-94fc-024b1b9ef725" (UID: "d9f887f5-6ce0-4320-94fc-024b1b9ef725"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.545258 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9f887f5-6ce0-4320-94fc-024b1b9ef725-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.545507 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7pcw\" (UniqueName: \"kubernetes.io/projected/d9f887f5-6ce0-4320-94fc-024b1b9ef725-kube-api-access-b7pcw\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.545615 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.545691 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.552944 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9f887f5-6ce0-4320-94fc-024b1b9ef725" (UID: "d9f887f5-6ce0-4320-94fc-024b1b9ef725"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.603499 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-config-data" (OuterVolumeSpecName: "config-data") pod "d9f887f5-6ce0-4320-94fc-024b1b9ef725" (UID: "d9f887f5-6ce0-4320-94fc-024b1b9ef725"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.648091 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:36 crc kubenswrapper[4836]: I0217 14:29:36.648128 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f887f5-6ce0-4320-94fc-024b1b9ef725-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.209206 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" event={"ID":"a17ffb1e-09d2-4524-8c33-e50e15b9031d","Type":"ContainerStarted","Data":"7821eb4bc638ddc9a8abc154e0b8520b0768fce182177f2c81c9157b5724d831"} Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.210522 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" event={"ID":"a17ffb1e-09d2-4524-8c33-e50e15b9031d","Type":"ContainerStarted","Data":"24bb742ba2025f263a0fd97d88da8e8a321ca187d115b21d9de3b0fa231a07a0"} Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.218828 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9f887f5-6ce0-4320-94fc-024b1b9ef725","Type":"ContainerDied","Data":"2b8910649b123c250a9b2ae2a0273df5052a76cf9ac3a4d666b31acdde9dcd6e"} Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.219042 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.219180 4836 scope.go:117] "RemoveContainer" containerID="c8b99a1c879fa5cc6c1f47dcb0736390ea6d8c4736e3f5b0bc65697ec35d7092" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.264069 4836 scope.go:117] "RemoveContainer" containerID="b3a6771b8e2f194ac7bcb94abab4f0e58b19807bc132dac6154a588305752da0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.265443 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.287481 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.309288 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:37 crc kubenswrapper[4836]: E0217 14:29:37.309896 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-central-agent" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.309918 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-central-agent" Feb 17 14:29:37 crc kubenswrapper[4836]: E0217 14:29:37.309948 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-notification-agent" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.309954 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-notification-agent" Feb 17 14:29:37 crc kubenswrapper[4836]: E0217 14:29:37.309982 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="sg-core" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.309988 4836 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="sg-core" Feb 17 14:29:37 crc kubenswrapper[4836]: E0217 14:29:37.309999 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="proxy-httpd" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.310005 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="proxy-httpd" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.310220 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-notification-agent" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.310242 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="sg-core" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.310253 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="proxy-httpd" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.310265 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" containerName="ceilometer-central-agent" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.312998 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.321955 4836 scope.go:117] "RemoveContainer" containerID="3af1c0902b859232ebd0fe7d2dd1bbcfb19e56b6f4b1d314aace0818279210a3" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.325462 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.325684 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.351212 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.355774 4836 scope.go:117] "RemoveContainer" containerID="fb5cf9ee8d101cc6fae1fb5f79f35c27c28cf9fc0fa0631bc345f006efba64c9" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.365708 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-scripts\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.365773 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.365858 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-log-httpd\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 
17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.365915 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7ns\" (UniqueName: \"kubernetes.io/projected/56c9a452-ffd5-4b03-97a9-93546a194414-kube-api-access-pq7ns\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.366026 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-config-data\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.366096 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-run-httpd\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.366186 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.422384 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.422721 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-log" 
containerID="cri-o://4231e0f0134e5c8db2d1379ad611e9d1ddd911c706b7c534c46f5a480fa7035b" gracePeriod=30 Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.423320 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-httpd" containerID="cri-o://ffac93583d3a46218a79cd0eec11b0e9213bdce6e0622ee8ec1b1030a56cebbf" gracePeriod=30 Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.469243 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7ns\" (UniqueName: \"kubernetes.io/projected/56c9a452-ffd5-4b03-97a9-93546a194414-kube-api-access-pq7ns\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.469393 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-config-data\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.469462 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-run-httpd\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.469504 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.469614 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-scripts\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.469648 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.469697 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-log-httpd\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.470484 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-log-httpd\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.471432 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-run-httpd\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.478422 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 
14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.482557 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-config-data\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.483807 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-scripts\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.486208 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.497178 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7ns\" (UniqueName: \"kubernetes.io/projected/56c9a452-ffd5-4b03-97a9-93546a194414-kube-api-access-pq7ns\") pod \"ceilometer-0\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.648248 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.949252 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-q8wrd"] Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.951004 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.961149 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q8wrd"] Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.983231 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b1aa3a-dc15-4ec1-ba76-8246e300422f-operator-scripts\") pod \"nova-api-db-create-q8wrd\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:37 crc kubenswrapper[4836]: I0217 14:29:37.983597 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2z4m\" (UniqueName: \"kubernetes.io/projected/88b1aa3a-dc15-4ec1-ba76-8246e300422f-kube-api-access-d2z4m\") pod \"nova-api-db-create-q8wrd\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.133192 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2z4m\" (UniqueName: \"kubernetes.io/projected/88b1aa3a-dc15-4ec1-ba76-8246e300422f-kube-api-access-d2z4m\") pod \"nova-api-db-create-q8wrd\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.133906 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b1aa3a-dc15-4ec1-ba76-8246e300422f-operator-scripts\") pod \"nova-api-db-create-q8wrd\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.141398 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/88b1aa3a-dc15-4ec1-ba76-8246e300422f-operator-scripts\") pod \"nova-api-db-create-q8wrd\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.175238 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2z4m\" (UniqueName: \"kubernetes.io/projected/88b1aa3a-dc15-4ec1-ba76-8246e300422f-kube-api-access-d2z4m\") pod \"nova-api-db-create-q8wrd\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.192974 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-npl52"] Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.225431 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-npl52"] Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.232410 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.587171 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.600656 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db342a3d-55f5-4b0c-b96f-327014b6fb82-operator-scripts\") pod \"nova-cell0-db-create-npl52\" (UID: \"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.600738 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknvs\" (UniqueName: \"kubernetes.io/projected/db342a3d-55f5-4b0c-b96f-327014b6fb82-kube-api-access-hknvs\") pod \"nova-cell0-db-create-npl52\" (UID: \"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.606564 4836 generic.go:334] "Generic (PLEG): container finished" podID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerID="4231e0f0134e5c8db2d1379ad611e9d1ddd911c706b7c534c46f5a480fa7035b" exitCode=143 Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.634889 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f887f5-6ce0-4320-94fc-024b1b9ef725" path="/var/lib/kubelet/pods/d9f887f5-6ce0-4320-94fc-024b1b9ef725/volumes" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.639480 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c29f84b9-3879-4fc6-b2aa-e334bd08f24e","Type":"ContainerDied","Data":"4231e0f0134e5c8db2d1379ad611e9d1ddd911c706b7c534c46f5a480fa7035b"} Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.650988 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" 
event={"ID":"a17ffb1e-09d2-4524-8c33-e50e15b9031d","Type":"ContainerStarted","Data":"86a79ba10952e5f4ee8bb7f3e479555554f9c751b514251142b7ca704ac5d0dc"} Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.651413 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.651434 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.702698 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db342a3d-55f5-4b0c-b96f-327014b6fb82-operator-scripts\") pod \"nova-cell0-db-create-npl52\" (UID: \"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.702830 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hknvs\" (UniqueName: \"kubernetes.io/projected/db342a3d-55f5-4b0c-b96f-327014b6fb82-kube-api-access-hknvs\") pod \"nova-cell0-db-create-npl52\" (UID: \"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.704730 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db342a3d-55f5-4b0c-b96f-327014b6fb82-operator-scripts\") pod \"nova-cell0-db-create-npl52\" (UID: \"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.743840 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hknvs\" (UniqueName: \"kubernetes.io/projected/db342a3d-55f5-4b0c-b96f-327014b6fb82-kube-api-access-hknvs\") pod \"nova-cell0-db-create-npl52\" (UID: 
\"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.799585 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a7c4-account-create-update-qj5lb"] Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.801443 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.808644 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.827596 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a7c4-account-create-update-qj5lb"] Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.873480 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-5h5m9"] Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.876188 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.894107 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5h5m9"] Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.910232 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0312359b-98a6-49c7-83f1-fb44c679e8aa-operator-scripts\") pod \"nova-cell1-db-create-5h5m9\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.910617 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsfl\" (UniqueName: \"kubernetes.io/projected/0312359b-98a6-49c7-83f1-fb44c679e8aa-kube-api-access-nmsfl\") pod \"nova-cell1-db-create-5h5m9\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.910830 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-operator-scripts\") pod \"nova-api-a7c4-account-create-update-qj5lb\" (UID: \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.910920 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2xjk\" (UniqueName: \"kubernetes.io/projected/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-kube-api-access-f2xjk\") pod \"nova-api-a7c4-account-create-update-qj5lb\" (UID: \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 
14:29:38.916257 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" podStartSLOduration=13.916213951 podStartE2EDuration="13.916213951s" podCreationTimestamp="2026-02-17 14:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:38.706321833 +0000 UTC m=+1405.049250102" watchObservedRunningTime="2026-02-17 14:29:38.916213951 +0000 UTC m=+1405.259142240" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.951798 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.968581 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8fba-account-create-update-gqd5n"] Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.970331 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.975817 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 14:29:38 crc kubenswrapper[4836]: I0217 14:29:38.979068 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8fba-account-create-update-gqd5n"] Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.012961 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8171da-ad25-4388-9dab-2afc19993d97-operator-scripts\") pod \"nova-cell0-8fba-account-create-update-gqd5n\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.014169 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f2xjk\" (UniqueName: \"kubernetes.io/projected/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-kube-api-access-f2xjk\") pod \"nova-api-a7c4-account-create-update-qj5lb\" (UID: \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.014593 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bk6l\" (UniqueName: \"kubernetes.io/projected/0b8171da-ad25-4388-9dab-2afc19993d97-kube-api-access-9bk6l\") pod \"nova-cell0-8fba-account-create-update-gqd5n\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.014781 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0312359b-98a6-49c7-83f1-fb44c679e8aa-operator-scripts\") pod \"nova-cell1-db-create-5h5m9\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.014936 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsfl\" (UniqueName: \"kubernetes.io/projected/0312359b-98a6-49c7-83f1-fb44c679e8aa-kube-api-access-nmsfl\") pod \"nova-cell1-db-create-5h5m9\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.032381 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-28f5-account-create-update-74tvm"] Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.035041 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.042513 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.064741 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-28f5-account-create-update-74tvm"] Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.073748 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-operator-scripts\") pod \"nova-api-a7c4-account-create-update-qj5lb\" (UID: \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.074706 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0312359b-98a6-49c7-83f1-fb44c679e8aa-operator-scripts\") pod \"nova-cell1-db-create-5h5m9\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.075859 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-operator-scripts\") pod \"nova-api-a7c4-account-create-update-qj5lb\" (UID: \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.083639 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2xjk\" (UniqueName: \"kubernetes.io/projected/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-kube-api-access-f2xjk\") pod \"nova-api-a7c4-account-create-update-qj5lb\" (UID: 
\"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.122859 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmsfl\" (UniqueName: \"kubernetes.io/projected/0312359b-98a6-49c7-83f1-fb44c679e8aa-kube-api-access-nmsfl\") pod \"nova-cell1-db-create-5h5m9\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.152151 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.183759 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8171da-ad25-4388-9dab-2afc19993d97-operator-scripts\") pod \"nova-cell0-8fba-account-create-update-gqd5n\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.183853 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc00367-2940-413d-872a-74d4fa37fc1f-operator-scripts\") pod \"nova-cell1-28f5-account-create-update-74tvm\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.183918 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbccw\" (UniqueName: \"kubernetes.io/projected/4dc00367-2940-413d-872a-74d4fa37fc1f-kube-api-access-dbccw\") pod \"nova-cell1-28f5-account-create-update-74tvm\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " 
pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.183977 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bk6l\" (UniqueName: \"kubernetes.io/projected/0b8171da-ad25-4388-9dab-2afc19993d97-kube-api-access-9bk6l\") pod \"nova-cell0-8fba-account-create-update-gqd5n\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.185520 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8171da-ad25-4388-9dab-2afc19993d97-operator-scripts\") pod \"nova-cell0-8fba-account-create-update-gqd5n\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.213965 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bk6l\" (UniqueName: \"kubernetes.io/projected/0b8171da-ad25-4388-9dab-2afc19993d97-kube-api-access-9bk6l\") pod \"nova-cell0-8fba-account-create-update-gqd5n\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.225442 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.285515 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc00367-2940-413d-872a-74d4fa37fc1f-operator-scripts\") pod \"nova-cell1-28f5-account-create-update-74tvm\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.285852 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbccw\" (UniqueName: \"kubernetes.io/projected/4dc00367-2940-413d-872a-74d4fa37fc1f-kube-api-access-dbccw\") pod \"nova-cell1-28f5-account-create-update-74tvm\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.287837 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc00367-2940-413d-872a-74d4fa37fc1f-operator-scripts\") pod \"nova-cell1-28f5-account-create-update-74tvm\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.315258 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbccw\" (UniqueName: \"kubernetes.io/projected/4dc00367-2940-413d-872a-74d4fa37fc1f-kube-api-access-dbccw\") pod \"nova-cell1-28f5-account-create-update-74tvm\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.332682 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.788337 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.922496 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:39 crc kubenswrapper[4836]: I0217 14:29:39.940113 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q8wrd"] Feb 17 14:29:40 crc kubenswrapper[4836]: W0217 14:29:40.008843 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b1aa3a_dc15_4ec1_ba76_8246e300422f.slice/crio-bb26936403352ce5d4b38858c684c231085b015394ef1a491b1db62d38cc94f0 WatchSource:0}: Error finding container bb26936403352ce5d4b38858c684c231085b015394ef1a491b1db62d38cc94f0: Status 404 returned error can't find the container with id bb26936403352ce5d4b38858c684c231085b015394ef1a491b1db62d38cc94f0 Feb 17 14:29:40 crc kubenswrapper[4836]: I0217 14:29:40.227652 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-npl52"] Feb 17 14:29:40 crc kubenswrapper[4836]: W0217 14:29:40.668327 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7d61f8c_4804_49b6_937e_fbaf20aa3ed2.slice/crio-a8bfacaf56b208729fed6ae7379213c44a5bf9bbc00aaa497d58947acfd5fda8 WatchSource:0}: Error finding container a8bfacaf56b208729fed6ae7379213c44a5bf9bbc00aaa497d58947acfd5fda8: Status 404 returned error can't find the container with id a8bfacaf56b208729fed6ae7379213c44a5bf9bbc00aaa497d58947acfd5fda8 Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.025848 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5h5m9"] 
Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.081839 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5h5m9" event={"ID":"0312359b-98a6-49c7-83f1-fb44c679e8aa","Type":"ContainerStarted","Data":"8aebd8b0cf09f0b5a71ad7edb46a57f5a3212f3d6f8147621e038f3b2d4a75eb"} Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.105675 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q8wrd" event={"ID":"88b1aa3a-dc15-4ec1-ba76-8246e300422f","Type":"ContainerStarted","Data":"e2428efba069899bf573bcb1f933d6f640083a8f0e4830cd36751b8b3332488d"} Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.106074 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q8wrd" event={"ID":"88b1aa3a-dc15-4ec1-ba76-8246e300422f","Type":"ContainerStarted","Data":"bb26936403352ce5d4b38858c684c231085b015394ef1a491b1db62d38cc94f0"} Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.122966 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerStarted","Data":"3732eb36b7746243f4a9bad758b1bf9afb106bf058ee751c3feddbab6042cb9c"} Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.151412 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a7c4-account-create-update-qj5lb"] Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.162060 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8fba-account-create-update-gqd5n"] Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.167728 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-npl52" event={"ID":"db342a3d-55f5-4b0c-b96f-327014b6fb82","Type":"ContainerStarted","Data":"7dba2d07908548962f40435efa50aed2a21f68c9f55a50ad39cc396d718c6cf2"} Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.167796 4836 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-db-create-npl52" event={"ID":"db342a3d-55f5-4b0c-b96f-327014b6fb82","Type":"ContainerStarted","Data":"582663419ea06870f82c61e67b714be6e79694fd2b49d90ddf21ffdb14cf9940"} Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.191597 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" event={"ID":"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2","Type":"ContainerStarted","Data":"a8bfacaf56b208729fed6ae7379213c44a5bf9bbc00aaa497d58947acfd5fda8"} Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.219195 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-28f5-account-create-update-74tvm"] Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.261390 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.261785 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-log" containerID="cri-o://2d37a99072f4fb6a9bc38dee8c6986d96ef5977cd1d2c3dca6d3d95cb5f3bcee" gracePeriod=30 Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.262555 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-httpd" containerID="cri-o://253884c8bfee6f38dc03fef1da6c5e47b92d31a3b1592567360ef3f04d7144a9" gracePeriod=30 Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.281970 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-npl52" podStartSLOduration=3.281934907 podStartE2EDuration="3.281934907s" podCreationTimestamp="2026-02-17 14:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:41.20837493 +0000 UTC m=+1407.551303209" watchObservedRunningTime="2026-02-17 14:29:41.281934907 +0000 UTC m=+1407.624863176" Feb 17 14:29:41 crc kubenswrapper[4836]: I0217 14:29:41.630146 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.635159 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerStarted","Data":"d3bbefd170a172c21cb9f9e3cfad807a2c0bb5fe5338d9d264fb6ae4c6ff5de7"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.641411 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" event={"ID":"0b8171da-ad25-4388-9dab-2afc19993d97","Type":"ContainerStarted","Data":"4978da281b4ffb4cbde1dc06e973f40c67a116248d8a8623898e48ea004f575f"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.663972 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" event={"ID":"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2","Type":"ContainerStarted","Data":"940b27e8f09ea23f3f385f55c83e9233f241038d9dc1c8761036c1c3dbf2e000"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.692446 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5h5m9" event={"ID":"0312359b-98a6-49c7-83f1-fb44c679e8aa","Type":"ContainerStarted","Data":"66b9158b23020b3eaa0a3cea1af11df9fcdac6316e74751284cbec084e23c3a0"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.719913 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" podStartSLOduration=4.719883018 podStartE2EDuration="4.719883018s" podCreationTimestamp="2026-02-17 14:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:42.706776283 +0000 UTC m=+1409.049704552" watchObservedRunningTime="2026-02-17 14:29:42.719883018 +0000 UTC m=+1409.062811307" Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.721767 4836 generic.go:334] "Generic (PLEG): container finished" podID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerID="ffac93583d3a46218a79cd0eec11b0e9213bdce6e0622ee8ec1b1030a56cebbf" exitCode=0 Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.721944 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c29f84b9-3879-4fc6-b2aa-e334bd08f24e","Type":"ContainerDied","Data":"ffac93583d3a46218a79cd0eec11b0e9213bdce6e0622ee8ec1b1030a56cebbf"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.735241 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" event={"ID":"4dc00367-2940-413d-872a-74d4fa37fc1f","Type":"ContainerStarted","Data":"684fea3361f7992d7677d58a81ef405045d31b021d431929ec2a4e0d9ce8e5bf"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.739884 4836 generic.go:334] "Generic (PLEG): container finished" podID="db342a3d-55f5-4b0c-b96f-327014b6fb82" containerID="7dba2d07908548962f40435efa50aed2a21f68c9f55a50ad39cc396d718c6cf2" exitCode=0 Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.740011 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-npl52" event={"ID":"db342a3d-55f5-4b0c-b96f-327014b6fb82","Type":"ContainerDied","Data":"7dba2d07908548962f40435efa50aed2a21f68c9f55a50ad39cc396d718c6cf2"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.747260 4836 generic.go:334] "Generic (PLEG): container finished" podID="88b1aa3a-dc15-4ec1-ba76-8246e300422f" containerID="e2428efba069899bf573bcb1f933d6f640083a8f0e4830cd36751b8b3332488d" exitCode=0 Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 
14:29:42.747423 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q8wrd" event={"ID":"88b1aa3a-dc15-4ec1-ba76-8246e300422f","Type":"ContainerDied","Data":"e2428efba069899bf573bcb1f933d6f640083a8f0e4830cd36751b8b3332488d"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.773576 4836 generic.go:334] "Generic (PLEG): container finished" podID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerID="2d37a99072f4fb6a9bc38dee8c6986d96ef5977cd1d2c3dca6d3d95cb5f3bcee" exitCode=143 Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.773653 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fc032cb-3063-4e39-a91f-ccc89defe9c4","Type":"ContainerDied","Data":"2d37a99072f4fb6a9bc38dee8c6986d96ef5977cd1d2c3dca6d3d95cb5f3bcee"} Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.796449 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-5h5m9" podStartSLOduration=4.796421274 podStartE2EDuration="4.796421274s" podCreationTimestamp="2026-02-17 14:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:42.75921038 +0000 UTC m=+1409.102138639" watchObservedRunningTime="2026-02-17 14:29:42.796421274 +0000 UTC m=+1409.139349543" Feb 17 14:29:42 crc kubenswrapper[4836]: I0217 14:29:42.820750 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" podStartSLOduration=4.820720211 podStartE2EDuration="4.820720211s" podCreationTimestamp="2026-02-17 14:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:42.805384497 +0000 UTC m=+1409.148312786" watchObservedRunningTime="2026-02-17 14:29:42.820720211 +0000 UTC m=+1409.163648490" Feb 17 14:29:43 
crc kubenswrapper[4836]: I0217 14:29:43.769080 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.776164 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.791860 4836 generic.go:334] "Generic (PLEG): container finished" podID="4dc00367-2940-413d-872a-74d4fa37fc1f" containerID="b40337010298624b5f124e89e37fbded22f8ac5a672bad50ecf9c49dfa1ed535" exitCode=0 Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.792002 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" event={"ID":"4dc00367-2940-413d-872a-74d4fa37fc1f","Type":"ContainerDied","Data":"b40337010298624b5f124e89e37fbded22f8ac5a672bad50ecf9c49dfa1ed535"} Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.795661 4836 generic.go:334] "Generic (PLEG): container finished" podID="c7d61f8c-4804-49b6-937e-fbaf20aa3ed2" containerID="940b27e8f09ea23f3f385f55c83e9233f241038d9dc1c8761036c1c3dbf2e000" exitCode=0 Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.795848 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" event={"ID":"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2","Type":"ContainerDied","Data":"940b27e8f09ea23f3f385f55c83e9233f241038d9dc1c8761036c1c3dbf2e000"} Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.800165 4836 generic.go:334] "Generic (PLEG): container finished" podID="0312359b-98a6-49c7-83f1-fb44c679e8aa" containerID="66b9158b23020b3eaa0a3cea1af11df9fcdac6316e74751284cbec084e23c3a0" exitCode=0 Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.800267 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5h5m9" 
event={"ID":"0312359b-98a6-49c7-83f1-fb44c679e8aa","Type":"ContainerDied","Data":"66b9158b23020b3eaa0a3cea1af11df9fcdac6316e74751284cbec084e23c3a0"} Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.803579 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q8wrd" event={"ID":"88b1aa3a-dc15-4ec1-ba76-8246e300422f","Type":"ContainerDied","Data":"bb26936403352ce5d4b38858c684c231085b015394ef1a491b1db62d38cc94f0"} Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.803610 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb26936403352ce5d4b38858c684c231085b015394ef1a491b1db62d38cc94f0" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.803704 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q8wrd" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.811644 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c29f84b9-3879-4fc6-b2aa-e334bd08f24e","Type":"ContainerDied","Data":"a73e6cf975755957f05fddc903522d5d75b3eb7f41eb5a42c5ad06b115f44634"} Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.811727 4836 scope.go:117] "RemoveContainer" containerID="ffac93583d3a46218a79cd0eec11b0e9213bdce6e0622ee8ec1b1030a56cebbf" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.811963 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.818686 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerStarted","Data":"dfd206b5463d3cc6f1e9888d9e21e49bdbfd5f95e1989f13c38e03bb00682c21"} Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.838836 4836 generic.go:334] "Generic (PLEG): container finished" podID="0b8171da-ad25-4388-9dab-2afc19993d97" containerID="a870dbadddedc2cd296e8c04a81b16817f6df39787b8061ee58f3dfc1fec3ca8" exitCode=0 Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.839135 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" event={"ID":"0b8171da-ad25-4388-9dab-2afc19993d97","Type":"ContainerDied","Data":"a870dbadddedc2cd296e8c04a81b16817f6df39787b8061ee58f3dfc1fec3ca8"} Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856499 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856611 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2z4m\" (UniqueName: \"kubernetes.io/projected/88b1aa3a-dc15-4ec1-ba76-8246e300422f-kube-api-access-d2z4m\") pod \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856641 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-httpd-run\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: 
\"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856689 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-logs\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856713 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-scripts\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856737 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-combined-ca-bundle\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856772 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cbj8\" (UniqueName: \"kubernetes.io/projected/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-kube-api-access-8cbj8\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856813 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b1aa3a-dc15-4ec1-ba76-8246e300422f-operator-scripts\") pod \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\" (UID: \"88b1aa3a-dc15-4ec1-ba76-8246e300422f\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856861 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-config-data\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.856950 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-public-tls-certs\") pod \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\" (UID: \"c29f84b9-3879-4fc6-b2aa-e334bd08f24e\") " Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.869975 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88b1aa3a-dc15-4ec1-ba76-8246e300422f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88b1aa3a-dc15-4ec1-ba76-8246e300422f" (UID: "88b1aa3a-dc15-4ec1-ba76-8246e300422f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.874808 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.871874 4836 scope.go:117] "RemoveContainer" containerID="4231e0f0134e5c8db2d1379ad611e9d1ddd911c706b7c534c46f5a480fa7035b" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.882742 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-logs" (OuterVolumeSpecName: "logs") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.915864 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-kube-api-access-8cbj8" (OuterVolumeSpecName: "kube-api-access-8cbj8") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "kube-api-access-8cbj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.915990 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b1aa3a-dc15-4ec1-ba76-8246e300422f-kube-api-access-d2z4m" (OuterVolumeSpecName: "kube-api-access-d2z4m") pod "88b1aa3a-dc15-4ec1-ba76-8246e300422f" (UID: "88b1aa3a-dc15-4ec1-ba76-8246e300422f"). InnerVolumeSpecName "kube-api-access-d2z4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.931488 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-scripts" (OuterVolumeSpecName: "scripts") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.943862 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.961428 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2z4m\" (UniqueName: \"kubernetes.io/projected/88b1aa3a-dc15-4ec1-ba76-8246e300422f-kube-api-access-d2z4m\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.961478 4836 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.961498 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.961511 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.961521 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.961531 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cbj8\" (UniqueName: \"kubernetes.io/projected/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-kube-api-access-8cbj8\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:43 crc kubenswrapper[4836]: I0217 14:29:43.961548 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b1aa3a-dc15-4ec1-ba76-8246e300422f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.070386 4836 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.073040 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34" (OuterVolumeSpecName: "glance") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.134816 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-config-data" (OuterVolumeSpecName: "config-data") pod "c29f84b9-3879-4fc6-b2aa-e334bd08f24e" (UID: "c29f84b9-3879-4fc6-b2aa-e334bd08f24e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.154552 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.198106 4836 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") on node \"crc\" " Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.198152 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.198169 4836 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29f84b9-3879-4fc6-b2aa-e334bd08f24e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.247887 4836 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.248063 4836 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34") on node "crc" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.298870 4836 reconciler_common.go:293] "Volume detached for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.460152 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.983166 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db342a3d-55f5-4b0c-b96f-327014b6fb82-operator-scripts\") pod \"db342a3d-55f5-4b0c-b96f-327014b6fb82\" (UID: \"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.983273 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hknvs\" (UniqueName: \"kubernetes.io/projected/db342a3d-55f5-4b0c-b96f-327014b6fb82-kube-api-access-hknvs\") pod \"db342a3d-55f5-4b0c-b96f-327014b6fb82\" (UID: \"db342a3d-55f5-4b0c-b96f-327014b6fb82\") " Feb 17 14:29:44 crc kubenswrapper[4836]: I0217 14:29:44.995238 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db342a3d-55f5-4b0c-b96f-327014b6fb82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db342a3d-55f5-4b0c-b96f-327014b6fb82" (UID: "db342a3d-55f5-4b0c-b96f-327014b6fb82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.044940 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db342a3d-55f5-4b0c-b96f-327014b6fb82-kube-api-access-hknvs" (OuterVolumeSpecName: "kube-api-access-hknvs") pod "db342a3d-55f5-4b0c-b96f-327014b6fb82" (UID: "db342a3d-55f5-4b0c-b96f-327014b6fb82"). InnerVolumeSpecName "kube-api-access-hknvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.084833 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.086000 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db342a3d-55f5-4b0c-b96f-327014b6fb82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.086049 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hknvs\" (UniqueName: \"kubernetes.io/projected/db342a3d-55f5-4b0c-b96f-327014b6fb82-kube-api-access-hknvs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.135363 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-npl52" event={"ID":"db342a3d-55f5-4b0c-b96f-327014b6fb82","Type":"ContainerDied","Data":"582663419ea06870f82c61e67b714be6e79694fd2b49d90ddf21ffdb14cf9940"} Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.135415 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="582663419ea06870f82c61e67b714be6e79694fd2b49d90ddf21ffdb14cf9940" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.135529 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-npl52" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.162545 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.293039 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:29:45 crc kubenswrapper[4836]: E0217 14:29:45.294229 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-log" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294261 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-log" Feb 17 14:29:45 crc kubenswrapper[4836]: E0217 14:29:45.294308 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b1aa3a-dc15-4ec1-ba76-8246e300422f" containerName="mariadb-database-create" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294318 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b1aa3a-dc15-4ec1-ba76-8246e300422f" containerName="mariadb-database-create" Feb 17 14:29:45 crc kubenswrapper[4836]: E0217 14:29:45.294355 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-httpd" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294365 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-httpd" Feb 17 14:29:45 crc kubenswrapper[4836]: E0217 14:29:45.294401 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db342a3d-55f5-4b0c-b96f-327014b6fb82" containerName="mariadb-database-create" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294410 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="db342a3d-55f5-4b0c-b96f-327014b6fb82" containerName="mariadb-database-create" 
Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294683 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-log" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294718 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="db342a3d-55f5-4b0c-b96f-327014b6fb82" containerName="mariadb-database-create" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294740 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" containerName="glance-httpd" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.294755 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b1aa3a-dc15-4ec1-ba76-8246e300422f" containerName="mariadb-database-create" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.302462 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.316641 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.317023 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.374383 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:29:45 crc kubenswrapper[4836]: E0217 14:29:45.438016 4836 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb342a3d_55f5_4b0c_b96f_327014b6fb82.slice\": RecentStats: unable to find data in memory cache]" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.512389 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-scripts\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.516420 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.516474 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcl4r\" (UniqueName: \"kubernetes.io/projected/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-kube-api-access-wcl4r\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.516605 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.516628 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " 
pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.516725 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.516772 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-logs\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.517072 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-config-data\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640195 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640246 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " 
pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640353 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640403 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-logs\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640645 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-config-data\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640848 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-scripts\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640895 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 
14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.640922 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcl4r\" (UniqueName: \"kubernetes.io/projected/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-kube-api-access-wcl4r\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.646409 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-logs\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.646761 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.662906 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-config-data\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.670793 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-scripts\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.671544 4836 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.677076 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcl4r\" (UniqueName: \"kubernetes.io/projected/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-kube-api-access-wcl4r\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.682250 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5121f0d-e93f-44c6-96b5-4ed7b6ec960e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.736842 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.737141 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1c05c143b5a67726d067625f4c5da25dac4624853da03b1088e3ef561519b77/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 14:29:45 crc kubenswrapper[4836]: I0217 14:29:45.826555 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a47ba0-1e72-4c37-bb61-5251f8f68b34\") pod \"glance-default-external-api-0\" (UID: \"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e\") " pod="openstack/glance-default-external-api-0" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.168069 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.266910 4836 generic.go:334] "Generic (PLEG): container finished" podID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerID="253884c8bfee6f38dc03fef1da6c5e47b92d31a3b1592567360ef3f04d7144a9" exitCode=0 Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.273679 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fc032cb-3063-4e39-a91f-ccc89defe9c4","Type":"ContainerDied","Data":"253884c8bfee6f38dc03fef1da6c5e47b92d31a3b1592567360ef3f04d7144a9"} Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.503013 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.625611 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29f84b9-3879-4fc6-b2aa-e334bd08f24e" path="/var/lib/kubelet/pods/c29f84b9-3879-4fc6-b2aa-e334bd08f24e/volumes" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.643360 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.666057 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.688782 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d87f46c5f-vfn9f" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.689014 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.690408 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bk6l\" (UniqueName: \"kubernetes.io/projected/0b8171da-ad25-4388-9dab-2afc19993d97-kube-api-access-9bk6l\") pod \"0b8171da-ad25-4388-9dab-2afc19993d97\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.690606 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8171da-ad25-4388-9dab-2afc19993d97-operator-scripts\") pod \"0b8171da-ad25-4388-9dab-2afc19993d97\" (UID: \"0b8171da-ad25-4388-9dab-2afc19993d97\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.691809 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8171da-ad25-4388-9dab-2afc19993d97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b8171da-ad25-4388-9dab-2afc19993d97" (UID: "0b8171da-ad25-4388-9dab-2afc19993d97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.714088 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b8171da-ad25-4388-9dab-2afc19993d97-kube-api-access-9bk6l" (OuterVolumeSpecName: "kube-api-access-9bk6l") pod "0b8171da-ad25-4388-9dab-2afc19993d97" (UID: "0b8171da-ad25-4388-9dab-2afc19993d97"). InnerVolumeSpecName "kube-api-access-9bk6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.794471 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2xjk\" (UniqueName: \"kubernetes.io/projected/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-kube-api-access-f2xjk\") pod \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\" (UID: \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.794707 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0312359b-98a6-49c7-83f1-fb44c679e8aa-operator-scripts\") pod \"0312359b-98a6-49c7-83f1-fb44c679e8aa\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.794750 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-operator-scripts\") pod \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\" (UID: \"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.794828 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmsfl\" (UniqueName: \"kubernetes.io/projected/0312359b-98a6-49c7-83f1-fb44c679e8aa-kube-api-access-nmsfl\") pod \"0312359b-98a6-49c7-83f1-fb44c679e8aa\" (UID: \"0312359b-98a6-49c7-83f1-fb44c679e8aa\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.794883 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc00367-2940-413d-872a-74d4fa37fc1f-operator-scripts\") pod \"4dc00367-2940-413d-872a-74d4fa37fc1f\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.794902 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-dbccw\" (UniqueName: \"kubernetes.io/projected/4dc00367-2940-413d-872a-74d4fa37fc1f-kube-api-access-dbccw\") pod \"4dc00367-2940-413d-872a-74d4fa37fc1f\" (UID: \"4dc00367-2940-413d-872a-74d4fa37fc1f\") " Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.795966 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bk6l\" (UniqueName: \"kubernetes.io/projected/0b8171da-ad25-4388-9dab-2afc19993d97-kube-api-access-9bk6l\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.795982 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8171da-ad25-4388-9dab-2afc19993d97-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.798031 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7d61f8c-4804-49b6-937e-fbaf20aa3ed2" (UID: "c7d61f8c-4804-49b6-937e-fbaf20aa3ed2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.798407 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0312359b-98a6-49c7-83f1-fb44c679e8aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0312359b-98a6-49c7-83f1-fb44c679e8aa" (UID: "0312359b-98a6-49c7-83f1-fb44c679e8aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.799438 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc00367-2940-413d-872a-74d4fa37fc1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4dc00367-2940-413d-872a-74d4fa37fc1f" (UID: "4dc00367-2940-413d-872a-74d4fa37fc1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.807676 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0312359b-98a6-49c7-83f1-fb44c679e8aa-kube-api-access-nmsfl" (OuterVolumeSpecName: "kube-api-access-nmsfl") pod "0312359b-98a6-49c7-83f1-fb44c679e8aa" (UID: "0312359b-98a6-49c7-83f1-fb44c679e8aa"). InnerVolumeSpecName "kube-api-access-nmsfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.807714 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc00367-2940-413d-872a-74d4fa37fc1f-kube-api-access-dbccw" (OuterVolumeSpecName: "kube-api-access-dbccw") pod "4dc00367-2940-413d-872a-74d4fa37fc1f" (UID: "4dc00367-2940-413d-872a-74d4fa37fc1f"). InnerVolumeSpecName "kube-api-access-dbccw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.808197 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-kube-api-access-f2xjk" (OuterVolumeSpecName: "kube-api-access-f2xjk") pod "c7d61f8c-4804-49b6-937e-fbaf20aa3ed2" (UID: "c7d61f8c-4804-49b6-937e-fbaf20aa3ed2"). InnerVolumeSpecName "kube-api-access-f2xjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.901443 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0312359b-98a6-49c7-83f1-fb44c679e8aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.901478 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.901487 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmsfl\" (UniqueName: \"kubernetes.io/projected/0312359b-98a6-49c7-83f1-fb44c679e8aa-kube-api-access-nmsfl\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.901500 4836 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc00367-2940-413d-872a-74d4fa37fc1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.901508 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbccw\" (UniqueName: \"kubernetes.io/projected/4dc00367-2940-413d-872a-74d4fa37fc1f-kube-api-access-dbccw\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.901516 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2xjk\" (UniqueName: \"kubernetes.io/projected/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2-kube-api-access-f2xjk\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:46 crc kubenswrapper[4836]: I0217 14:29:46.995774 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110223 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-internal-tls-certs\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110364 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-combined-ca-bundle\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110405 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-scripts\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110451 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-config-data\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110496 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-logs\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110550 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-httpd-run\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110716 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.110775 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6smd\" (UniqueName: \"kubernetes.io/projected/9fc032cb-3063-4e39-a91f-ccc89defe9c4-kube-api-access-x6smd\") pod \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\" (UID: \"9fc032cb-3063-4e39-a91f-ccc89defe9c4\") " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.113799 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-logs" (OuterVolumeSpecName: "logs") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.113996 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.126552 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc032cb-3063-4e39-a91f-ccc89defe9c4-kube-api-access-x6smd" (OuterVolumeSpecName: "kube-api-access-x6smd") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "kube-api-access-x6smd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.396966 4836 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.398155 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6smd\" (UniqueName: \"kubernetes.io/projected/9fc032cb-3063-4e39-a91f-ccc89defe9c4-kube-api-access-x6smd\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.398173 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fc032cb-3063-4e39-a91f-ccc89defe9c4-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.398329 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-scripts" (OuterVolumeSpecName: "scripts") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.430488 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.445667 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf" (OuterVolumeSpecName: "glance") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.469706 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fc032cb-3063-4e39-a91f-ccc89defe9c4","Type":"ContainerDied","Data":"38f3541a8bef919fb1afd541589fd4540ccef699d3e6a2e7f1dcb0859f09ea45"} Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.469767 4836 scope.go:117] "RemoveContainer" containerID="253884c8bfee6f38dc03fef1da6c5e47b92d31a3b1592567360ef3f04d7144a9" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.470007 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.473788 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.482130 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" event={"ID":"0b8171da-ad25-4388-9dab-2afc19993d97","Type":"ContainerDied","Data":"4978da281b4ffb4cbde1dc06e973f40c67a116248d8a8623898e48ea004f575f"} Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.482171 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4978da281b4ffb4cbde1dc06e973f40c67a116248d8a8623898e48ea004f575f" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.482315 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8fba-account-create-update-gqd5n" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.497772 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" event={"ID":"4dc00367-2940-413d-872a-74d4fa37fc1f","Type":"ContainerDied","Data":"684fea3361f7992d7677d58a81ef405045d31b021d431929ec2a4e0d9ce8e5bf"} Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.497838 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="684fea3361f7992d7677d58a81ef405045d31b021d431929ec2a4e0d9ce8e5bf" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.498001 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-28f5-account-create-update-74tvm" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.504871 4836 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") on node \"crc\" " Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.504919 4836 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.504933 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.504946 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.510230 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" event={"ID":"c7d61f8c-4804-49b6-937e-fbaf20aa3ed2","Type":"ContainerDied","Data":"a8bfacaf56b208729fed6ae7379213c44a5bf9bbc00aaa497d58947acfd5fda8"} Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.510277 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8bfacaf56b208729fed6ae7379213c44a5bf9bbc00aaa497d58947acfd5fda8" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.510373 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a7c4-account-create-update-qj5lb" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.513824 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5h5m9" event={"ID":"0312359b-98a6-49c7-83f1-fb44c679e8aa","Type":"ContainerDied","Data":"8aebd8b0cf09f0b5a71ad7edb46a57f5a3212f3d6f8147621e038f3b2d4a75eb"} Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.513873 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aebd8b0cf09f0b5a71ad7edb46a57f5a3212f3d6f8147621e038f3b2d4a75eb" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.513966 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5h5m9" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.515002 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-config-data" (OuterVolumeSpecName: "config-data") pod "9fc032cb-3063-4e39-a91f-ccc89defe9c4" (UID: "9fc032cb-3063-4e39-a91f-ccc89defe9c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.528511 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerStarted","Data":"e9f32ae3116b957a3eb3e85cc5cb945cd5cac421baa8b9f3e186e90cea341d86"} Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.612037 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fc032cb-3063-4e39-a91f-ccc89defe9c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.666232 4836 scope.go:117] "RemoveContainer" containerID="2d37a99072f4fb6a9bc38dee8c6986d96ef5977cd1d2c3dca6d3d95cb5f3bcee" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.676142 4836 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.681567 4836 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf") on node "crc" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.726791 4836 reconciler_common.go:293] "Volume detached for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.783211 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.924433 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.953979 4836 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.994734 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:29:47 crc kubenswrapper[4836]: E0217 14:29:47.995853 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d61f8c-4804-49b6-937e-fbaf20aa3ed2" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.995963 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d61f8c-4804-49b6-937e-fbaf20aa3ed2" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: E0217 14:29:47.996124 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-log" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.996187 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-log" Feb 17 14:29:47 crc kubenswrapper[4836]: E0217 14:29:47.996268 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0312359b-98a6-49c7-83f1-fb44c679e8aa" containerName="mariadb-database-create" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.996361 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="0312359b-98a6-49c7-83f1-fb44c679e8aa" containerName="mariadb-database-create" Feb 17 14:29:47 crc kubenswrapper[4836]: E0217 14:29:47.996477 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8171da-ad25-4388-9dab-2afc19993d97" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.996547 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8171da-ad25-4388-9dab-2afc19993d97" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: E0217 14:29:47.996614 4836 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4dc00367-2940-413d-872a-74d4fa37fc1f" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.996671 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc00367-2940-413d-872a-74d4fa37fc1f" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: E0217 14:29:47.996728 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-httpd" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.996808 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-httpd" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.997172 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-log" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.997246 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="0312359b-98a6-49c7-83f1-fb44c679e8aa" containerName="mariadb-database-create" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.997380 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d61f8c-4804-49b6-937e-fbaf20aa3ed2" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.997447 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc00367-2940-413d-872a-74d4fa37fc1f" containerName="mariadb-account-create-update" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.997516 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" containerName="glance-httpd" Feb 17 14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.997589 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b8171da-ad25-4388-9dab-2afc19993d97" containerName="mariadb-account-create-update" Feb 17 
14:29:47 crc kubenswrapper[4836]: I0217 14:29:47.999531 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.002629 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.003170 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.010532 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.148256 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/172fadf8-99d3-436a-b711-010e8ffe289b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.148754 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.148811 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.148850 4836 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.148884 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.149037 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172fadf8-99d3-436a-b711-010e8ffe289b-logs\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.149110 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.149344 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d984t\" (UniqueName: \"kubernetes.io/projected/172fadf8-99d3-436a-b711-010e8ffe289b-kube-api-access-d984t\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.257925 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d984t\" (UniqueName: \"kubernetes.io/projected/172fadf8-99d3-436a-b711-010e8ffe289b-kube-api-access-d984t\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.258017 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/172fadf8-99d3-436a-b711-010e8ffe289b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.258061 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.258094 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.258125 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" 
Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.258152 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.258214 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172fadf8-99d3-436a-b711-010e8ffe289b-logs\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.258233 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.259514 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/172fadf8-99d3-436a-b711-010e8ffe289b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.261835 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172fadf8-99d3-436a-b711-010e8ffe289b-logs\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.262580 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.264508 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.266571 4836 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.266596 4836 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20e9fd566d593755c515c6f55c386051b7cebe94721b27d85313d87ab22fcec4/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.267487 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.269028 4836 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/172fadf8-99d3-436a-b711-010e8ffe289b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.280002 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d984t\" (UniqueName: \"kubernetes.io/projected/172fadf8-99d3-436a-b711-010e8ffe289b-kube-api-access-d984t\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.333368 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21ce1609-dbd7-400f-b0f4-c62fbe057ccf\") pod \"glance-default-internal-api-0\" (UID: \"172fadf8-99d3-436a-b711-010e8ffe289b\") " pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.371670 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:48 crc kubenswrapper[4836]: I0217 14:29:48.960939 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc032cb-3063-4e39-a91f-ccc89defe9c4" path="/var/lib/kubelet/pods/9fc032cb-3063-4e39-a91f-ccc89defe9c4/volumes" Feb 17 14:29:49 crc kubenswrapper[4836]: I0217 14:29:49.004310 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e","Type":"ContainerStarted","Data":"eee4041fb1838b993b2e57f0d04d074f1c54f5467ef071b6911929052f11a3ae"} Feb 17 14:29:49 crc kubenswrapper[4836]: I0217 14:29:49.578739 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-896gw"] Feb 17 14:29:49 crc kubenswrapper[4836]: I0217 14:29:49.595401 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:49 crc kubenswrapper[4836]: I0217 14:29:49.608885 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 14:29:49 crc kubenswrapper[4836]: I0217 14:29:49.609687 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4kxcj" Feb 17 14:29:49 crc kubenswrapper[4836]: I0217 14:29:49.609835 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.045815 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-896gw"] Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.109965 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-scripts\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: 
\"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.110192 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-config-data\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.110742 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.110853 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lng9\" (UniqueName: \"kubernetes.io/projected/5284ac65-3629-4b0f-94ce-114964fe6d15-kube-api-access-9lng9\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.119829 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerStarted","Data":"af4d958a142f654427a1232ba635917cf28fa5a94e024ffbc7ea40f1fda64c7a"} Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.120244 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-central-agent" 
containerID="cri-o://d3bbefd170a172c21cb9f9e3cfad807a2c0bb5fe5338d9d264fb6ae4c6ff5de7" gracePeriod=30 Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.125763 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="proxy-httpd" containerID="cri-o://af4d958a142f654427a1232ba635917cf28fa5a94e024ffbc7ea40f1fda64c7a" gracePeriod=30 Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.125837 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="sg-core" containerID="cri-o://e9f32ae3116b957a3eb3e85cc5cb945cd5cac421baa8b9f3e186e90cea341d86" gracePeriod=30 Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.125893 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-notification-agent" containerID="cri-o://dfd206b5463d3cc6f1e9888d9e21e49bdbfd5f95e1989f13c38e03bb00682c21" gracePeriod=30 Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.120819 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.174504 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.170128532 podStartE2EDuration="13.174475318s" podCreationTimestamp="2026-02-17 14:29:37 +0000 UTC" firstStartedPulling="2026-02-17 14:29:39.966639217 +0000 UTC m=+1406.309567486" lastFinishedPulling="2026-02-17 14:29:47.970986003 +0000 UTC m=+1414.313914272" observedRunningTime="2026-02-17 14:29:50.168922818 +0000 UTC m=+1416.511851087" watchObservedRunningTime="2026-02-17 14:29:50.174475318 +0000 UTC m=+1416.517403587" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.213179 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.213625 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lng9\" (UniqueName: \"kubernetes.io/projected/5284ac65-3629-4b0f-94ce-114964fe6d15-kube-api-access-9lng9\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.213669 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-scripts\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.213703 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-config-data\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.244462 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-config-data\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.245310 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-scripts\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.248755 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lng9\" (UniqueName: \"kubernetes.io/projected/5284ac65-3629-4b0f-94ce-114964fe6d15-kube-api-access-9lng9\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.262663 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-896gw\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:50 crc kubenswrapper[4836]: W0217 14:29:50.299197 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172fadf8_99d3_436a_b711_010e8ffe289b.slice/crio-179c39149c981ba659609d3f03303df1c786c03bd40758053a07f8ad2935bd68 WatchSource:0}: Error finding container 179c39149c981ba659609d3f03303df1c786c03bd40758053a07f8ad2935bd68: Status 404 returned error can't find the container with id 179c39149c981ba659609d3f03303df1c786c03bd40758053a07f8ad2935bd68 Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.299333 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 14:29:50 crc kubenswrapper[4836]: I0217 14:29:50.332714 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:29:51 crc kubenswrapper[4836]: I0217 14:29:51.025692 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-896gw"] Feb 17 14:29:51 crc kubenswrapper[4836]: W0217 14:29:51.037511 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5284ac65_3629_4b0f_94ce_114964fe6d15.slice/crio-9e6101c106f2eea30b241c7cb1b7e2d51a47ac641af556866c4a4cf6f00c0aad WatchSource:0}: Error finding container 9e6101c106f2eea30b241c7cb1b7e2d51a47ac641af556866c4a4cf6f00c0aad: Status 404 returned error can't find the container with id 9e6101c106f2eea30b241c7cb1b7e2d51a47ac641af556866c4a4cf6f00c0aad Feb 17 14:29:51 crc kubenswrapper[4836]: I0217 14:29:51.144661 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"172fadf8-99d3-436a-b711-010e8ffe289b","Type":"ContainerStarted","Data":"179c39149c981ba659609d3f03303df1c786c03bd40758053a07f8ad2935bd68"} Feb 17 14:29:51 crc kubenswrapper[4836]: I0217 14:29:51.153168 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-896gw" event={"ID":"5284ac65-3629-4b0f-94ce-114964fe6d15","Type":"ContainerStarted","Data":"9e6101c106f2eea30b241c7cb1b7e2d51a47ac641af556866c4a4cf6f00c0aad"} Feb 17 14:29:51 crc kubenswrapper[4836]: I0217 14:29:51.158091 4836 generic.go:334] "Generic (PLEG): container finished" podID="56c9a452-ffd5-4b03-97a9-93546a194414" containerID="af4d958a142f654427a1232ba635917cf28fa5a94e024ffbc7ea40f1fda64c7a" exitCode=0 Feb 17 14:29:51 crc kubenswrapper[4836]: I0217 14:29:51.158147 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerDied","Data":"af4d958a142f654427a1232ba635917cf28fa5a94e024ffbc7ea40f1fda64c7a"} Feb 17 14:29:52 crc 
kubenswrapper[4836]: I0217 14:29:52.205612 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"172fadf8-99d3-436a-b711-010e8ffe289b","Type":"ContainerStarted","Data":"16df76dbfff6e228f61eb7dd36f51ac2cc1e26c0fbe526656d756c8cd2c0e93e"} Feb 17 14:29:52 crc kubenswrapper[4836]: I0217 14:29:52.209358 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e","Type":"ContainerStarted","Data":"cfc49db59f8cb7aada23feb5e94f69f569dcdd81eb5540b251670502235191b8"} Feb 17 14:29:52 crc kubenswrapper[4836]: I0217 14:29:52.212245 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4fe674a8-c32b-412e-8d20-2a6e7e18bb10","Type":"ContainerStarted","Data":"46a4010331e50fdf7e24756f54d4faba9466e11dc0a570feeed789e0e0fe6807"} Feb 17 14:29:52 crc kubenswrapper[4836]: I0217 14:29:52.236120 4836 generic.go:334] "Generic (PLEG): container finished" podID="56c9a452-ffd5-4b03-97a9-93546a194414" containerID="e9f32ae3116b957a3eb3e85cc5cb945cd5cac421baa8b9f3e186e90cea341d86" exitCode=2 Feb 17 14:29:52 crc kubenswrapper[4836]: I0217 14:29:52.236164 4836 generic.go:334] "Generic (PLEG): container finished" podID="56c9a452-ffd5-4b03-97a9-93546a194414" containerID="dfd206b5463d3cc6f1e9888d9e21e49bdbfd5f95e1989f13c38e03bb00682c21" exitCode=0 Feb 17 14:29:52 crc kubenswrapper[4836]: I0217 14:29:52.236188 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerDied","Data":"e9f32ae3116b957a3eb3e85cc5cb945cd5cac421baa8b9f3e186e90cea341d86"} Feb 17 14:29:52 crc kubenswrapper[4836]: I0217 14:29:52.236225 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerDied","Data":"dfd206b5463d3cc6f1e9888d9e21e49bdbfd5f95e1989f13c38e03bb00682c21"} Feb 17 14:29:52 crc kubenswrapper[4836]: I0217 14:29:52.251681 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=6.811449014 podStartE2EDuration="44.251651241s" podCreationTimestamp="2026-02-17 14:29:08 +0000 UTC" firstStartedPulling="2026-02-17 14:29:10.694855327 +0000 UTC m=+1377.037783596" lastFinishedPulling="2026-02-17 14:29:48.135057554 +0000 UTC m=+1414.477985823" observedRunningTime="2026-02-17 14:29:52.239562156 +0000 UTC m=+1418.582490425" watchObservedRunningTime="2026-02-17 14:29:52.251651241 +0000 UTC m=+1418.594579530" Feb 17 14:29:53 crc kubenswrapper[4836]: I0217 14:29:53.257841 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"172fadf8-99d3-436a-b711-010e8ffe289b","Type":"ContainerStarted","Data":"f21a427ef140473387eb828643e5c1c1f5df7ae54ee3624be131f330b8f47e43"} Feb 17 14:29:53 crc kubenswrapper[4836]: I0217 14:29:53.266975 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b5121f0d-e93f-44c6-96b5-4ed7b6ec960e","Type":"ContainerStarted","Data":"e5f0909bf1bdf3c38ab94d209afa4c703fec8864b01634ec0cb8a2070cb29a63"} Feb 17 14:29:53 crc kubenswrapper[4836]: I0217 14:29:53.289551 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.28953086 podStartE2EDuration="6.28953086s" podCreationTimestamp="2026-02-17 14:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:53.286186729 +0000 UTC m=+1419.629114998" watchObservedRunningTime="2026-02-17 14:29:53.28953086 +0000 UTC m=+1419.632459129" Feb 17 14:29:53 crc 
kubenswrapper[4836]: I0217 14:29:53.326583 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.326531669 podStartE2EDuration="8.326531669s" podCreationTimestamp="2026-02-17 14:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:29:53.321417591 +0000 UTC m=+1419.664345860" watchObservedRunningTime="2026-02-17 14:29:53.326531669 +0000 UTC m=+1419.669459938" Feb 17 14:29:56 crc kubenswrapper[4836]: I0217 14:29:56.403424 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 14:29:56 crc kubenswrapper[4836]: I0217 14:29:56.405081 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 14:29:56 crc kubenswrapper[4836]: I0217 14:29:56.512812 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 14:29:56 crc kubenswrapper[4836]: I0217 14:29:56.533197 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 14:29:56 crc kubenswrapper[4836]: I0217 14:29:56.864536 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.194:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:29:56 crc kubenswrapper[4836]: I0217 14:29:56.892509 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.194:8889/healthcheck\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:29:57 crc kubenswrapper[4836]: I0217 14:29:57.491909 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 14:29:57 crc kubenswrapper[4836]: I0217 14:29:57.491951 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 14:29:58 crc kubenswrapper[4836]: I0217 14:29:58.874486 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:58 crc kubenswrapper[4836]: I0217 14:29:58.930021 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:58 crc kubenswrapper[4836]: I0217 14:29:58.954615 4836 generic.go:334] "Generic (PLEG): container finished" podID="56c9a452-ffd5-4b03-97a9-93546a194414" containerID="d3bbefd170a172c21cb9f9e3cfad807a2c0bb5fe5338d9d264fb6ae4c6ff5de7" exitCode=0 Feb 17 14:29:58 crc kubenswrapper[4836]: I0217 14:29:58.955799 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerDied","Data":"d3bbefd170a172c21cb9f9e3cfad807a2c0bb5fe5338d9d264fb6ae4c6ff5de7"} Feb 17 14:29:58 crc kubenswrapper[4836]: I0217 14:29:58.972923 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.013193 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.485944 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.630847 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-combined-ca-bundle\") pod \"56c9a452-ffd5-4b03-97a9-93546a194414\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.631588 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-config-data\") pod \"56c9a452-ffd5-4b03-97a9-93546a194414\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.631681 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-scripts\") pod \"56c9a452-ffd5-4b03-97a9-93546a194414\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.631739 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-log-httpd\") pod \"56c9a452-ffd5-4b03-97a9-93546a194414\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.631889 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7ns\" (UniqueName: \"kubernetes.io/projected/56c9a452-ffd5-4b03-97a9-93546a194414-kube-api-access-pq7ns\") pod \"56c9a452-ffd5-4b03-97a9-93546a194414\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.631952 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-sg-core-conf-yaml\") pod \"56c9a452-ffd5-4b03-97a9-93546a194414\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.631979 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-run-httpd\") pod \"56c9a452-ffd5-4b03-97a9-93546a194414\" (UID: \"56c9a452-ffd5-4b03-97a9-93546a194414\") " Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.632693 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "56c9a452-ffd5-4b03-97a9-93546a194414" (UID: "56c9a452-ffd5-4b03-97a9-93546a194414"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.632901 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.633122 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "56c9a452-ffd5-4b03-97a9-93546a194414" (UID: "56c9a452-ffd5-4b03-97a9-93546a194414"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.640019 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-scripts" (OuterVolumeSpecName: "scripts") pod "56c9a452-ffd5-4b03-97a9-93546a194414" (UID: "56c9a452-ffd5-4b03-97a9-93546a194414"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.652476 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c9a452-ffd5-4b03-97a9-93546a194414-kube-api-access-pq7ns" (OuterVolumeSpecName: "kube-api-access-pq7ns") pod "56c9a452-ffd5-4b03-97a9-93546a194414" (UID: "56c9a452-ffd5-4b03-97a9-93546a194414"). InnerVolumeSpecName "kube-api-access-pq7ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.714453 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "56c9a452-ffd5-4b03-97a9-93546a194414" (UID: "56c9a452-ffd5-4b03-97a9-93546a194414"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.737133 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7ns\" (UniqueName: \"kubernetes.io/projected/56c9a452-ffd5-4b03-97a9-93546a194414-kube-api-access-pq7ns\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.737184 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.737198 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c9a452-ffd5-4b03-97a9-93546a194414-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.737208 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.765089 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.765364 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.813683 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56c9a452-ffd5-4b03-97a9-93546a194414" (UID: "56c9a452-ffd5-4b03-97a9-93546a194414"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.825131 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-config-data" (OuterVolumeSpecName: "config-data") pod "56c9a452-ffd5-4b03-97a9-93546a194414" (UID: "56c9a452-ffd5-4b03-97a9-93546a194414"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.839831 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.839892 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c9a452-ffd5-4b03-97a9-93546a194414-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.970185 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c9a452-ffd5-4b03-97a9-93546a194414","Type":"ContainerDied","Data":"3732eb36b7746243f4a9bad758b1bf9afb106bf058ee751c3feddbab6042cb9c"} Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.970252 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.970286 4836 scope.go:117] "RemoveContainer" containerID="af4d958a142f654427a1232ba635917cf28fa5a94e024ffbc7ea40f1fda64c7a" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.970450 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.970468 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.972644 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 14:29:59 crc kubenswrapper[4836]: I0217 14:29:59.972675 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.006241 4836 scope.go:117] "RemoveContainer" 
containerID="e9f32ae3116b957a3eb3e85cc5cb945cd5cac421baa8b9f3e186e90cea341d86" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.015016 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.030088 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.095213 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:00 crc kubenswrapper[4836]: E0217 14:30:00.097522 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-central-agent" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.097554 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-central-agent" Feb 17 14:30:00 crc kubenswrapper[4836]: E0217 14:30:00.097604 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-notification-agent" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.097612 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-notification-agent" Feb 17 14:30:00 crc kubenswrapper[4836]: E0217 14:30:00.097638 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="proxy-httpd" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.097645 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="proxy-httpd" Feb 17 14:30:00 crc kubenswrapper[4836]: E0217 14:30:00.097663 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="sg-core" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 
14:30:00.097670 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="sg-core" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.098205 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-notification-agent" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.098255 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="ceilometer-central-agent" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.098276 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="proxy-httpd" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.098310 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" containerName="sg-core" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.098365 4836 scope.go:117] "RemoveContainer" containerID="dfd206b5463d3cc6f1e9888d9e21e49bdbfd5f95e1989f13c38e03bb00682c21" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.112276 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.119087 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.119759 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.141740 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.158651 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-scripts\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.158786 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvn4\" (UniqueName: \"kubernetes.io/projected/3ff6c86e-b884-480e-b74b-30e4a586b5fa-kube-api-access-pfvn4\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.158818 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.159003 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-config-data\") pod \"ceilometer-0\" (UID: 
\"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.159068 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-log-httpd\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.159179 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-run-httpd\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.159236 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.200065 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.216394 4836 scope.go:117] "RemoveContainer" containerID="d3bbefd170a172c21cb9f9e3cfad807a2c0bb5fe5338d9d264fb6ae4c6ff5de7" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.251953 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:00 crc kubenswrapper[4836]: E0217 14:30:00.253153 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-pfvn4 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/ceilometer-0" podUID="3ff6c86e-b884-480e-b74b-30e4a586b5fa" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.261862 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-scripts\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.261976 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfvn4\" (UniqueName: \"kubernetes.io/projected/3ff6c86e-b884-480e-b74b-30e4a586b5fa-kube-api-access-pfvn4\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.261996 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.262086 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-config-data\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.262115 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-log-httpd\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.262177 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-run-httpd\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.262208 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.263525 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-log-httpd\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.264619 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-run-httpd\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.267928 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.273135 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-scripts\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.273245 4836 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr"] Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.274158 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.276140 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.278553 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-config-data\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.280926 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.285327 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfvn4\" (UniqueName: \"kubernetes.io/projected/3ff6c86e-b884-480e-b74b-30e4a586b5fa-kube-api-access-pfvn4\") pod \"ceilometer-0\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " pod="openstack/ceilometer-0" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.285551 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.370261 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-secret-volume\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.370755 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-config-volume\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.371345 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cv9x\" (UniqueName: \"kubernetes.io/projected/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-kube-api-access-5cv9x\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.735787 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cv9x\" (UniqueName: \"kubernetes.io/projected/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-kube-api-access-5cv9x\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.735900 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-secret-volume\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 
14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.735969 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-config-volume\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.737461 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-config-volume\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.752498 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-secret-volume\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.778513 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cv9x\" (UniqueName: \"kubernetes.io/projected/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-kube-api-access-5cv9x\") pod \"collect-profiles-29522310-jpncr\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.799018 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c9a452-ffd5-4b03-97a9-93546a194414" path="/var/lib/kubelet/pods/56c9a452-ffd5-4b03-97a9-93546a194414/volumes" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.842882 4836 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr"] Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.846111 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:00 crc kubenswrapper[4836]: I0217 14:30:00.991725 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.089601 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.261322 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-combined-ca-bundle\") pod \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.261455 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-sg-core-conf-yaml\") pod \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.261507 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfvn4\" (UniqueName: \"kubernetes.io/projected/3ff6c86e-b884-480e-b74b-30e4a586b5fa-kube-api-access-pfvn4\") pod \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.261568 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-scripts\") pod 
\"3ff6c86e-b884-480e-b74b-30e4a586b5fa\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.261640 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-run-httpd\") pod \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.261664 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-log-httpd\") pod \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.261683 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-config-data\") pod \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\" (UID: \"3ff6c86e-b884-480e-b74b-30e4a586b5fa\") " Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.263641 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ff6c86e-b884-480e-b74b-30e4a586b5fa" (UID: "3ff6c86e-b884-480e-b74b-30e4a586b5fa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.263860 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ff6c86e-b884-480e-b74b-30e4a586b5fa" (UID: "3ff6c86e-b884-480e-b74b-30e4a586b5fa"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.272799 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ff6c86e-b884-480e-b74b-30e4a586b5fa" (UID: "3ff6c86e-b884-480e-b74b-30e4a586b5fa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.272964 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ff6c86e-b884-480e-b74b-30e4a586b5fa" (UID: "3ff6c86e-b884-480e-b74b-30e4a586b5fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.278449 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-config-data" (OuterVolumeSpecName: "config-data") pod "3ff6c86e-b884-480e-b74b-30e4a586b5fa" (UID: "3ff6c86e-b884-480e-b74b-30e4a586b5fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.280545 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-scripts" (OuterVolumeSpecName: "scripts") pod "3ff6c86e-b884-480e-b74b-30e4a586b5fa" (UID: "3ff6c86e-b884-480e-b74b-30e4a586b5fa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.284728 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff6c86e-b884-480e-b74b-30e4a586b5fa-kube-api-access-pfvn4" (OuterVolumeSpecName: "kube-api-access-pfvn4") pod "3ff6c86e-b884-480e-b74b-30e4a586b5fa" (UID: "3ff6c86e-b884-480e-b74b-30e4a586b5fa"). InnerVolumeSpecName "kube-api-access-pfvn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.366909 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfvn4\" (UniqueName: \"kubernetes.io/projected/3ff6c86e-b884-480e-b74b-30e4a586b5fa-kube-api-access-pfvn4\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.367281 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.367306 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.367318 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ff6c86e-b884-480e-b74b-30e4a586b5fa-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.367326 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.367336 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.367345 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ff6c86e-b884-480e-b74b-30e4a586b5fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.564056 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr"] Feb 17 14:30:01 crc kubenswrapper[4836]: W0217 14:30:01.577138 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod576199c0_9d59_4a1d_bd1d_ec32eb8fac02.slice/crio-acc8635fbbc7bb6f57c5d71b815410418e47fa88d5bfcb304605c21d210e514a WatchSource:0}: Error finding container acc8635fbbc7bb6f57c5d71b815410418e47fa88d5bfcb304605c21d210e514a: Status 404 returned error can't find the container with id acc8635fbbc7bb6f57c5d71b815410418e47fa88d5bfcb304605c21d210e514a Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.644251 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 14:30:01 crc kubenswrapper[4836]: I0217 14:30:01.644355 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.044371 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.044871 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.046359 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" 
event={"ID":"576199c0-9d59-4a1d-bd1d-ec32eb8fac02","Type":"ContainerStarted","Data":"acc8635fbbc7bb6f57c5d71b815410418e47fa88d5bfcb304605c21d210e514a"} Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.046514 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.133686 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.150838 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.165489 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.168891 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.181219 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.181903 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.228517 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.301648 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-log-httpd\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.301710 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-run-httpd\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.301890 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnt8\" (UniqueName: \"kubernetes.io/projected/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-kube-api-access-wdnt8\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.302051 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-scripts\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.302264 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.302347 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-config-data\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.302423 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407014 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-config-data\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407085 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407157 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-log-httpd\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407178 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-run-httpd\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407311 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnt8\" (UniqueName: \"kubernetes.io/projected/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-kube-api-access-wdnt8\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407385 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-scripts\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407466 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407799 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-log-httpd\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.407849 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-run-httpd\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.415501 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-scripts\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.415749 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.415796 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-config-data\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.442062 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.450209 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnt8\" (UniqueName: \"kubernetes.io/projected/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-kube-api-access-wdnt8\") pod \"ceilometer-0\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.510538 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:02 crc kubenswrapper[4836]: I0217 14:30:02.587532 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff6c86e-b884-480e-b74b-30e4a586b5fa" path="/var/lib/kubelet/pods/3ff6c86e-b884-480e-b74b-30e4a586b5fa/volumes" Feb 17 14:30:03 crc kubenswrapper[4836]: I0217 14:30:03.381917 4836 generic.go:334] "Generic (PLEG): container finished" podID="576199c0-9d59-4a1d-bd1d-ec32eb8fac02" containerID="f70928b304ac14ef13a56d539a6d1c81f6a91cc1b5670ddde0c85a6fb06b84fe" exitCode=0 Feb 17 14:30:03 crc kubenswrapper[4836]: I0217 14:30:03.382001 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" event={"ID":"576199c0-9d59-4a1d-bd1d-ec32eb8fac02","Type":"ContainerDied","Data":"f70928b304ac14ef13a56d539a6d1c81f6a91cc1b5670ddde0c85a6fb06b84fe"} Feb 17 14:30:04 crc kubenswrapper[4836]: I0217 14:30:04.099916 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 14:30:04 crc kubenswrapper[4836]: I0217 14:30:04.100422 4836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 14:30:04 crc kubenswrapper[4836]: I0217 14:30:04.104737 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 14:30:04 crc kubenswrapper[4836]: I0217 14:30:04.431722 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.435559 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.455164 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cv9x\" (UniqueName: \"kubernetes.io/projected/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-kube-api-access-5cv9x\") pod \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.455269 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-secret-volume\") pod \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.455288 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-config-volume\") pod \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\" (UID: \"576199c0-9d59-4a1d-bd1d-ec32eb8fac02\") " Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.456569 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-config-volume" (OuterVolumeSpecName: "config-volume") pod "576199c0-9d59-4a1d-bd1d-ec32eb8fac02" (UID: "576199c0-9d59-4a1d-bd1d-ec32eb8fac02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.462663 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-kube-api-access-5cv9x" (OuterVolumeSpecName: "kube-api-access-5cv9x") pod "576199c0-9d59-4a1d-bd1d-ec32eb8fac02" (UID: "576199c0-9d59-4a1d-bd1d-ec32eb8fac02"). 
InnerVolumeSpecName "kube-api-access-5cv9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.465938 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "576199c0-9d59-4a1d-bd1d-ec32eb8fac02" (UID: "576199c0-9d59-4a1d-bd1d-ec32eb8fac02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.516892 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" event={"ID":"576199c0-9d59-4a1d-bd1d-ec32eb8fac02","Type":"ContainerDied","Data":"acc8635fbbc7bb6f57c5d71b815410418e47fa88d5bfcb304605c21d210e514a"} Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.517271 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acc8635fbbc7bb6f57c5d71b815410418e47fa88d5bfcb304605c21d210e514a" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.517030 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-jpncr" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.561845 4836 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.561893 4836 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:11 crc kubenswrapper[4836]: I0217 14:30:11.561906 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cv9x\" (UniqueName: \"kubernetes.io/projected/576199c0-9d59-4a1d-bd1d-ec32eb8fac02-kube-api-access-5cv9x\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:12 crc kubenswrapper[4836]: I0217 14:30:12.270868 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:12 crc kubenswrapper[4836]: W0217 14:30:12.272930 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca3b87dc_01ac_4a72_a432_9a4503f13c0b.slice/crio-53fd4772108969ad15e091e6f34ccf6c953cf089c9c05df86f2022f850caee6d WatchSource:0}: Error finding container 53fd4772108969ad15e091e6f34ccf6c953cf089c9c05df86f2022f850caee6d: Status 404 returned error can't find the container with id 53fd4772108969ad15e091e6f34ccf6c953cf089c9c05df86f2022f850caee6d Feb 17 14:30:12 crc kubenswrapper[4836]: I0217 14:30:12.533803 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerStarted","Data":"53fd4772108969ad15e091e6f34ccf6c953cf089c9c05df86f2022f850caee6d"} Feb 17 14:30:12 crc kubenswrapper[4836]: I0217 14:30:12.546724 4836 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-896gw" event={"ID":"5284ac65-3629-4b0f-94ce-114964fe6d15","Type":"ContainerStarted","Data":"959d5cc1d8ba4d131ae83ee3b420db014e052fb98b3a6fa5c53753ae63d88003"} Feb 17 14:30:12 crc kubenswrapper[4836]: I0217 14:30:12.572354 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-896gw" podStartSLOduration=2.747300667 podStartE2EDuration="23.572332207s" podCreationTimestamp="2026-02-17 14:29:49 +0000 UTC" firstStartedPulling="2026-02-17 14:29:51.040835074 +0000 UTC m=+1417.383763343" lastFinishedPulling="2026-02-17 14:30:11.865866614 +0000 UTC m=+1438.208794883" observedRunningTime="2026-02-17 14:30:12.564613668 +0000 UTC m=+1438.907541947" watchObservedRunningTime="2026-02-17 14:30:12.572332207 +0000 UTC m=+1438.915260486" Feb 17 14:30:13 crc kubenswrapper[4836]: I0217 14:30:13.561182 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerStarted","Data":"f0419b2e3c8ef9c0f54a84e7512a9cde99f00cb5aa7b44a637e787be45f07ccd"} Feb 17 14:30:14 crc kubenswrapper[4836]: I0217 14:30:14.592858 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerStarted","Data":"d3297c8494404e0f55bc6c3d7032d9a3295e84dc803655d2e2df3e6ab7a747be"} Feb 17 14:30:15 crc kubenswrapper[4836]: I0217 14:30:15.609028 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerStarted","Data":"4337aced693eb74520c39cdaad50c2d06e723483b872e61eb2f707cc9550085e"} Feb 17 14:30:17 crc kubenswrapper[4836]: I0217 14:30:17.693383 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerStarted","Data":"fcda893980936a4e72f451c12f1e7a2007edb9f1324581557ec99a4e77ee81f9"} Feb 17 14:30:17 crc kubenswrapper[4836]: I0217 14:30:17.694094 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-central-agent" containerID="cri-o://f0419b2e3c8ef9c0f54a84e7512a9cde99f00cb5aa7b44a637e787be45f07ccd" gracePeriod=30 Feb 17 14:30:17 crc kubenswrapper[4836]: I0217 14:30:17.694225 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 14:30:17 crc kubenswrapper[4836]: I0217 14:30:17.694884 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="proxy-httpd" containerID="cri-o://fcda893980936a4e72f451c12f1e7a2007edb9f1324581557ec99a4e77ee81f9" gracePeriod=30 Feb 17 14:30:17 crc kubenswrapper[4836]: I0217 14:30:17.694956 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="sg-core" containerID="cri-o://4337aced693eb74520c39cdaad50c2d06e723483b872e61eb2f707cc9550085e" gracePeriod=30 Feb 17 14:30:17 crc kubenswrapper[4836]: I0217 14:30:17.695009 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-notification-agent" containerID="cri-o://d3297c8494404e0f55bc6c3d7032d9a3295e84dc803655d2e2df3e6ab7a747be" gracePeriod=30 Feb 17 14:30:18 crc kubenswrapper[4836]: I0217 14:30:18.711850 4836 generic.go:334] "Generic (PLEG): container finished" podID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerID="4337aced693eb74520c39cdaad50c2d06e723483b872e61eb2f707cc9550085e" exitCode=2 Feb 17 14:30:18 crc 
kubenswrapper[4836]: I0217 14:30:18.712426 4836 generic.go:334] "Generic (PLEG): container finished" podID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerID="d3297c8494404e0f55bc6c3d7032d9a3295e84dc803655d2e2df3e6ab7a747be" exitCode=0 Feb 17 14:30:18 crc kubenswrapper[4836]: I0217 14:30:18.712338 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerDied","Data":"4337aced693eb74520c39cdaad50c2d06e723483b872e61eb2f707cc9550085e"} Feb 17 14:30:18 crc kubenswrapper[4836]: I0217 14:30:18.713760 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerDied","Data":"d3297c8494404e0f55bc6c3d7032d9a3295e84dc803655d2e2df3e6ab7a747be"} Feb 17 14:30:20 crc kubenswrapper[4836]: I0217 14:30:20.158603 4836 generic.go:334] "Generic (PLEG): container finished" podID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerID="fcda893980936a4e72f451c12f1e7a2007edb9f1324581557ec99a4e77ee81f9" exitCode=0 Feb 17 14:30:20 crc kubenswrapper[4836]: I0217 14:30:20.158679 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerDied","Data":"fcda893980936a4e72f451c12f1e7a2007edb9f1324581557ec99a4e77ee81f9"} Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.123462 4836 generic.go:334] "Generic (PLEG): container finished" podID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerID="f0419b2e3c8ef9c0f54a84e7512a9cde99f00cb5aa7b44a637e787be45f07ccd" exitCode=0 Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.123697 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerDied","Data":"f0419b2e3c8ef9c0f54a84e7512a9cde99f00cb5aa7b44a637e787be45f07ccd"} Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.124438 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca3b87dc-01ac-4a72-a432-9a4503f13c0b","Type":"ContainerDied","Data":"53fd4772108969ad15e091e6f34ccf6c953cf089c9c05df86f2022f850caee6d"} Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.124472 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53fd4772108969ad15e091e6f34ccf6c953cf089c9c05df86f2022f850caee6d" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.148780 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.317374 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-log-httpd\") pod \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.317463 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-scripts\") pod \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.317551 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-sg-core-conf-yaml\") pod \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.317906 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-config-data\") pod \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\" (UID: 
\"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.317949 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-combined-ca-bundle\") pod \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.318060 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdnt8\" (UniqueName: \"kubernetes.io/projected/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-kube-api-access-wdnt8\") pod \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.318183 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-run-httpd\") pod \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\" (UID: \"ca3b87dc-01ac-4a72-a432-9a4503f13c0b\") " Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.322766 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ca3b87dc-01ac-4a72-a432-9a4503f13c0b" (UID: "ca3b87dc-01ac-4a72-a432-9a4503f13c0b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.323761 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ca3b87dc-01ac-4a72-a432-9a4503f13c0b" (UID: "ca3b87dc-01ac-4a72-a432-9a4503f13c0b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.344650 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-kube-api-access-wdnt8" (OuterVolumeSpecName: "kube-api-access-wdnt8") pod "ca3b87dc-01ac-4a72-a432-9a4503f13c0b" (UID: "ca3b87dc-01ac-4a72-a432-9a4503f13c0b"). InnerVolumeSpecName "kube-api-access-wdnt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.399524 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-scripts" (OuterVolumeSpecName: "scripts") pod "ca3b87dc-01ac-4a72-a432-9a4503f13c0b" (UID: "ca3b87dc-01ac-4a72-a432-9a4503f13c0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.422618 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdnt8\" (UniqueName: \"kubernetes.io/projected/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-kube-api-access-wdnt8\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.422681 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.422690 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.422698 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:28 
crc kubenswrapper[4836]: I0217 14:30:28.474538 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ca3b87dc-01ac-4a72-a432-9a4503f13c0b" (UID: "ca3b87dc-01ac-4a72-a432-9a4503f13c0b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.525280 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.591548 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca3b87dc-01ac-4a72-a432-9a4503f13c0b" (UID: "ca3b87dc-01ac-4a72-a432-9a4503f13c0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.608331 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-config-data" (OuterVolumeSpecName: "config-data") pod "ca3b87dc-01ac-4a72-a432-9a4503f13c0b" (UID: "ca3b87dc-01ac-4a72-a432-9a4503f13c0b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.628784 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:28 crc kubenswrapper[4836]: I0217 14:30:28.628823 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b87dc-01ac-4a72-a432-9a4503f13c0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.137325 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.179362 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.196154 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.228094 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:29 crc kubenswrapper[4836]: E0217 14:30:29.228681 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="sg-core" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.228708 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="sg-core" Feb 17 14:30:29 crc kubenswrapper[4836]: E0217 14:30:29.228744 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576199c0-9d59-4a1d-bd1d-ec32eb8fac02" containerName="collect-profiles" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.228751 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="576199c0-9d59-4a1d-bd1d-ec32eb8fac02" containerName="collect-profiles" Feb 17 14:30:29 crc 
kubenswrapper[4836]: E0217 14:30:29.228766 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-notification-agent" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.228776 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-notification-agent" Feb 17 14:30:29 crc kubenswrapper[4836]: E0217 14:30:29.228790 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-central-agent" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.228798 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-central-agent" Feb 17 14:30:29 crc kubenswrapper[4836]: E0217 14:30:29.228828 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="proxy-httpd" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.228836 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="proxy-httpd" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.229093 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="576199c0-9d59-4a1d-bd1d-ec32eb8fac02" containerName="collect-profiles" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.229127 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="proxy-httpd" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.229140 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-central-agent" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.229162 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" 
containerName="sg-core" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.229176 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" containerName="ceilometer-notification-agent" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.231423 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.235736 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.236026 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.258579 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.345412 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-run-httpd\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.345521 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.345706 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " 
pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.345801 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-log-httpd\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.346075 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-config-data\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.346419 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-scripts\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.346591 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7whp\" (UniqueName: \"kubernetes.io/projected/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-kube-api-access-l7whp\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.448508 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-run-httpd\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.448606 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.448655 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.448677 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-log-httpd\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.448721 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-config-data\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.448799 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-scripts\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.448853 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7whp\" (UniqueName: \"kubernetes.io/projected/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-kube-api-access-l7whp\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 
14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.449168 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-run-httpd\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.449478 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-log-httpd\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.455124 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.456447 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-config-data\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.461443 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-scripts\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.461680 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.467729 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7whp\" (UniqueName: \"kubernetes.io/projected/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-kube-api-access-l7whp\") pod \"ceilometer-0\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.602525 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.766397 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.766900 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.766979 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.769653 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c09fe81ffce38e5d9ef4195d8e69df0edfb238c5a8b73cb36be460e79dea4bb"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 
14:30:29 crc kubenswrapper[4836]: I0217 14:30:29.770398 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://3c09fe81ffce38e5d9ef4195d8e69df0edfb238c5a8b73cb36be460e79dea4bb" gracePeriod=600 Feb 17 14:30:30 crc kubenswrapper[4836]: W0217 14:30:30.147942 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7977b0a7_fd9c_4d3c_bc21_fbf9d0e70506.slice/crio-f76c5ca2919dcba6a57d2c3c18620a0d364cbf08343d7aa01bc14bf17c1cc24c WatchSource:0}: Error finding container f76c5ca2919dcba6a57d2c3c18620a0d364cbf08343d7aa01bc14bf17c1cc24c: Status 404 returned error can't find the container with id f76c5ca2919dcba6a57d2c3c18620a0d364cbf08343d7aa01bc14bf17c1cc24c Feb 17 14:30:30 crc kubenswrapper[4836]: I0217 14:30:30.150166 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:30 crc kubenswrapper[4836]: I0217 14:30:30.172736 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="3c09fe81ffce38e5d9ef4195d8e69df0edfb238c5a8b73cb36be460e79dea4bb" exitCode=0 Feb 17 14:30:30 crc kubenswrapper[4836]: I0217 14:30:30.172802 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"3c09fe81ffce38e5d9ef4195d8e69df0edfb238c5a8b73cb36be460e79dea4bb"} Feb 17 14:30:30 crc kubenswrapper[4836]: I0217 14:30:30.172869 4836 scope.go:117] "RemoveContainer" containerID="790067b54b3531952a7756a09b793da1fc53330ef71b8011e59f530ae444594e" Feb 17 14:30:30 crc kubenswrapper[4836]: I0217 14:30:30.584120 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ca3b87dc-01ac-4a72-a432-9a4503f13c0b" path="/var/lib/kubelet/pods/ca3b87dc-01ac-4a72-a432-9a4503f13c0b/volumes" Feb 17 14:30:31 crc kubenswrapper[4836]: I0217 14:30:31.191811 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerStarted","Data":"2a94678a06d4de49c0c0b68a141d38b36f2fd3139243ed587746cebb8d0a09d9"} Feb 17 14:30:31 crc kubenswrapper[4836]: I0217 14:30:31.192243 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerStarted","Data":"f76c5ca2919dcba6a57d2c3c18620a0d364cbf08343d7aa01bc14bf17c1cc24c"} Feb 17 14:30:31 crc kubenswrapper[4836]: I0217 14:30:31.200979 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20"} Feb 17 14:30:32 crc kubenswrapper[4836]: I0217 14:30:32.213259 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerStarted","Data":"36e8022aac7122767c45efd486545780cddb57ef019acdaab3f1b9c40d6c965d"} Feb 17 14:30:32 crc kubenswrapper[4836]: I0217 14:30:32.308016 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:30:33 crc kubenswrapper[4836]: I0217 14:30:33.229401 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerStarted","Data":"c56beb0cbfb019f8f773e8f80f2e2999175246d8b61e325eddb4ad43a6b127d4"} Feb 17 14:30:33 crc kubenswrapper[4836]: I0217 14:30:33.232133 4836 generic.go:334] "Generic (PLEG): container finished" podID="5284ac65-3629-4b0f-94ce-114964fe6d15" 
containerID="959d5cc1d8ba4d131ae83ee3b420db014e052fb98b3a6fa5c53753ae63d88003" exitCode=0 Feb 17 14:30:33 crc kubenswrapper[4836]: I0217 14:30:33.232307 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-896gw" event={"ID":"5284ac65-3629-4b0f-94ce-114964fe6d15","Type":"ContainerDied","Data":"959d5cc1d8ba4d131ae83ee3b420db014e052fb98b3a6fa5c53753ae63d88003"} Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.124794 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.238399 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-combined-ca-bundle\") pod \"5284ac65-3629-4b0f-94ce-114964fe6d15\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.238550 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-config-data\") pod \"5284ac65-3629-4b0f-94ce-114964fe6d15\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.238732 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lng9\" (UniqueName: \"kubernetes.io/projected/5284ac65-3629-4b0f-94ce-114964fe6d15-kube-api-access-9lng9\") pod \"5284ac65-3629-4b0f-94ce-114964fe6d15\" (UID: \"5284ac65-3629-4b0f-94ce-114964fe6d15\") " Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.238997 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-scripts\") pod \"5284ac65-3629-4b0f-94ce-114964fe6d15\" (UID: 
\"5284ac65-3629-4b0f-94ce-114964fe6d15\") " Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.247235 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-scripts" (OuterVolumeSpecName: "scripts") pod "5284ac65-3629-4b0f-94ce-114964fe6d15" (UID: "5284ac65-3629-4b0f-94ce-114964fe6d15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.265203 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-896gw" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.265544 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-896gw" event={"ID":"5284ac65-3629-4b0f-94ce-114964fe6d15","Type":"ContainerDied","Data":"9e6101c106f2eea30b241c7cb1b7e2d51a47ac641af556866c4a4cf6f00c0aad"} Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.265676 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e6101c106f2eea30b241c7cb1b7e2d51a47ac641af556866c4a4cf6f00c0aad" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.268344 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5284ac65-3629-4b0f-94ce-114964fe6d15-kube-api-access-9lng9" (OuterVolumeSpecName: "kube-api-access-9lng9") pod "5284ac65-3629-4b0f-94ce-114964fe6d15" (UID: "5284ac65-3629-4b0f-94ce-114964fe6d15"). InnerVolumeSpecName "kube-api-access-9lng9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.269574 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerStarted","Data":"0f482f5faddf89800b640dc5e78fd2ecafe8e9e7010c8aad8dada8307c95b71c"} Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.269828 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-central-agent" containerID="cri-o://2a94678a06d4de49c0c0b68a141d38b36f2fd3139243ed587746cebb8d0a09d9" gracePeriod=30 Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.269971 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.270475 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="proxy-httpd" containerID="cri-o://0f482f5faddf89800b640dc5e78fd2ecafe8e9e7010c8aad8dada8307c95b71c" gracePeriod=30 Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.270536 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="sg-core" containerID="cri-o://c56beb0cbfb019f8f773e8f80f2e2999175246d8b61e325eddb4ad43a6b127d4" gracePeriod=30 Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.270572 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-notification-agent" containerID="cri-o://36e8022aac7122767c45efd486545780cddb57ef019acdaab3f1b9c40d6c965d" gracePeriod=30 Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.316978 4836 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-config-data" (OuterVolumeSpecName: "config-data") pod "5284ac65-3629-4b0f-94ce-114964fe6d15" (UID: "5284ac65-3629-4b0f-94ce-114964fe6d15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.318587 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5284ac65-3629-4b0f-94ce-114964fe6d15" (UID: "5284ac65-3629-4b0f-94ce-114964fe6d15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.324396 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.429066117 podStartE2EDuration="6.32436186s" podCreationTimestamp="2026-02-17 14:30:29 +0000 UTC" firstStartedPulling="2026-02-17 14:30:30.158684057 +0000 UTC m=+1456.501612326" lastFinishedPulling="2026-02-17 14:30:34.0539798 +0000 UTC m=+1460.396908069" observedRunningTime="2026-02-17 14:30:35.312655483 +0000 UTC m=+1461.655583772" watchObservedRunningTime="2026-02-17 14:30:35.32436186 +0000 UTC m=+1461.667290139" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.354225 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.354310 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.354328 
4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5284ac65-3629-4b0f-94ce-114964fe6d15-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.354342 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lng9\" (UniqueName: \"kubernetes.io/projected/5284ac65-3629-4b0f-94ce-114964fe6d15-kube-api-access-9lng9\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.411529 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 14:30:35 crc kubenswrapper[4836]: E0217 14:30:35.412069 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5284ac65-3629-4b0f-94ce-114964fe6d15" containerName="nova-cell0-conductor-db-sync" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.412090 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5284ac65-3629-4b0f-94ce-114964fe6d15" containerName="nova-cell0-conductor-db-sync" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.412323 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="5284ac65-3629-4b0f-94ce-114964fe6d15" containerName="nova-cell0-conductor-db-sync" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.413091 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.436973 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.565644 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgv87\" (UniqueName: \"kubernetes.io/projected/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-kube-api-access-cgv87\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.566479 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.566608 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.670117 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgv87\" (UniqueName: \"kubernetes.io/projected/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-kube-api-access-cgv87\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.670705 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.670764 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.678578 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.681111 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.693256 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgv87\" (UniqueName: \"kubernetes.io/projected/00cffdcb-70af-415e-86a8-4f8eb7c0ba6f-kube-api-access-cgv87\") pod \"nova-cell0-conductor-0\" (UID: \"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f\") " pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.737025 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r5vl4"] Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.740082 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.765656 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r5vl4"] Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.787410 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.876111 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftf9s\" (UniqueName: \"kubernetes.io/projected/5d52263a-9417-43b6-903c-79e41b1200a0-kube-api-access-ftf9s\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.876997 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52263a-9417-43b6-903c-79e41b1200a0-utilities\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.877214 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52263a-9417-43b6-903c-79e41b1200a0-catalog-content\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.979733 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52263a-9417-43b6-903c-79e41b1200a0-utilities\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " 
pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.979821 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52263a-9417-43b6-903c-79e41b1200a0-catalog-content\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.980313 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftf9s\" (UniqueName: \"kubernetes.io/projected/5d52263a-9417-43b6-903c-79e41b1200a0-kube-api-access-ftf9s\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.983887 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52263a-9417-43b6-903c-79e41b1200a0-utilities\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:35 crc kubenswrapper[4836]: I0217 14:30:35.984015 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52263a-9417-43b6-903c-79e41b1200a0-catalog-content\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.018883 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftf9s\" (UniqueName: \"kubernetes.io/projected/5d52263a-9417-43b6-903c-79e41b1200a0-kube-api-access-ftf9s\") pod \"redhat-operators-r5vl4\" (UID: \"5d52263a-9417-43b6-903c-79e41b1200a0\") " pod="openshift-marketplace/redhat-operators-r5vl4" Feb 
17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.133728 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.308484 4836 generic.go:334] "Generic (PLEG): container finished" podID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerID="0f482f5faddf89800b640dc5e78fd2ecafe8e9e7010c8aad8dada8307c95b71c" exitCode=0 Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.308542 4836 generic.go:334] "Generic (PLEG): container finished" podID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerID="c56beb0cbfb019f8f773e8f80f2e2999175246d8b61e325eddb4ad43a6b127d4" exitCode=2 Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.308551 4836 generic.go:334] "Generic (PLEG): container finished" podID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerID="36e8022aac7122767c45efd486545780cddb57ef019acdaab3f1b9c40d6c965d" exitCode=0 Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.308595 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerDied","Data":"0f482f5faddf89800b640dc5e78fd2ecafe8e9e7010c8aad8dada8307c95b71c"} Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.308663 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerDied","Data":"c56beb0cbfb019f8f773e8f80f2e2999175246d8b61e325eddb4ad43a6b127d4"} Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.308682 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerDied","Data":"36e8022aac7122767c45efd486545780cddb57ef019acdaab3f1b9c40d6c965d"} Feb 17 14:30:36 crc kubenswrapper[4836]: W0217 14:30:36.451200 4836 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00cffdcb_70af_415e_86a8_4f8eb7c0ba6f.slice/crio-8490ed987ac63e5c963f1f64f86a48b7a33db4ee3520b61fb8dfa72cae6c38d4 WatchSource:0}: Error finding container 8490ed987ac63e5c963f1f64f86a48b7a33db4ee3520b61fb8dfa72cae6c38d4: Status 404 returned error can't find the container with id 8490ed987ac63e5c963f1f64f86a48b7a33db4ee3520b61fb8dfa72cae6c38d4 Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.455055 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 14:30:36 crc kubenswrapper[4836]: I0217 14:30:36.671774 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r5vl4"] Feb 17 14:30:37 crc kubenswrapper[4836]: I0217 14:30:37.325356 4836 generic.go:334] "Generic (PLEG): container finished" podID="5d52263a-9417-43b6-903c-79e41b1200a0" containerID="543693d067276811874d1e0bb4d0e4c0d0aa037b97569dbf9646ef328721db65" exitCode=0 Feb 17 14:30:37 crc kubenswrapper[4836]: I0217 14:30:37.325562 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5vl4" event={"ID":"5d52263a-9417-43b6-903c-79e41b1200a0","Type":"ContainerDied","Data":"543693d067276811874d1e0bb4d0e4c0d0aa037b97569dbf9646ef328721db65"} Feb 17 14:30:37 crc kubenswrapper[4836]: I0217 14:30:37.325606 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5vl4" event={"ID":"5d52263a-9417-43b6-903c-79e41b1200a0","Type":"ContainerStarted","Data":"3c5121268249f146eb49c77508540398c1fbd6e327b92f34e18793bdae5e01d9"} Feb 17 14:30:37 crc kubenswrapper[4836]: I0217 14:30:37.334542 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f","Type":"ContainerStarted","Data":"7a8ca6cfe91d443fa0ff1a509a189535b636308c6c14471f70f218c7b11dc7a1"} Feb 17 14:30:37 crc kubenswrapper[4836]: I0217 
14:30:37.334620 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"00cffdcb-70af-415e-86a8-4f8eb7c0ba6f","Type":"ContainerStarted","Data":"8490ed987ac63e5c963f1f64f86a48b7a33db4ee3520b61fb8dfa72cae6c38d4"} Feb 17 14:30:37 crc kubenswrapper[4836]: I0217 14:30:37.335437 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:37 crc kubenswrapper[4836]: I0217 14:30:37.400044 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.400012544 podStartE2EDuration="2.400012544s" podCreationTimestamp="2026-02-17 14:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:30:37.380751353 +0000 UTC m=+1463.723679652" watchObservedRunningTime="2026-02-17 14:30:37.400012544 +0000 UTC m=+1463.742940813" Feb 17 14:30:45 crc kubenswrapper[4836]: I0217 14:30:45.837192 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.491244 4836 generic.go:334] "Generic (PLEG): container finished" podID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerID="2a94678a06d4de49c0c0b68a141d38b36f2fd3139243ed587746cebb8d0a09d9" exitCode=0 Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.491515 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerDied","Data":"2a94678a06d4de49c0c0b68a141d38b36f2fd3139243ed587746cebb8d0a09d9"} Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.596904 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lqvvn"] Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.598638 4836 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.601513 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.607182 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqvvn"] Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.609199 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.716911 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-config-data\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.716965 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2gc\" (UniqueName: \"kubernetes.io/projected/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-kube-api-access-5b2gc\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.718056 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-scripts\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.718142 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.826376 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-config-data\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.826471 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2gc\" (UniqueName: \"kubernetes.io/projected/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-kube-api-access-5b2gc\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.826952 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-scripts\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.827004 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.846403 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.847015 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-config-data\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.851921 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-scripts\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.906530 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.924909 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.926360 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2gc\" (UniqueName: \"kubernetes.io/projected/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-kube-api-access-5b2gc\") pod \"nova-cell0-cell-mapping-lqvvn\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.935005 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.938049 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n528b\" (UniqueName: \"kubernetes.io/projected/c429025c-a79e-425a-987a-773baaba5ef2-kube-api-access-n528b\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.938157 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-config-data\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.938191 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.938255 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c429025c-a79e-425a-987a-773baaba5ef2-logs\") pod 
\"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:47 crc kubenswrapper[4836]: I0217 14:30:47.982538 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.038938 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.041184 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n528b\" (UniqueName: \"kubernetes.io/projected/c429025c-a79e-425a-987a-773baaba5ef2-kube-api-access-n528b\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.041294 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-config-data\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.041338 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.041424 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c429025c-a79e-425a-987a-773baaba5ef2-logs\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.053672 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.056653 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.058778 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-config-data\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.060836 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c429025c-a79e-425a-987a-773baaba5ef2-logs\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.075105 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.100752 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n528b\" (UniqueName: \"kubernetes.io/projected/c429025c-a79e-425a-987a-773baaba5ef2-kube-api-access-n528b\") pod \"nova-api-0\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.111821 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.143159 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6c45\" (UniqueName: 
\"kubernetes.io/projected/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-kube-api-access-w6c45\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.143243 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-config-data\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.143298 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.206197 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.214883 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.222635 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.236433 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.247340 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.252735 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-config-data\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.252892 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.252971 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czlhp\" (UniqueName: \"kubernetes.io/projected/3d6e757d-b7e9-417b-a63e-94879c7f3f74-kube-api-access-czlhp\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.253009 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6e757d-b7e9-417b-a63e-94879c7f3f74-logs\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.253097 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.253214 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6c45\" (UniqueName: \"kubernetes.io/projected/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-kube-api-access-w6c45\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.253242 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-config-data\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.259740 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.270552 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-config-data\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.288083 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6c45\" (UniqueName: 
\"kubernetes.io/projected/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-kube-api-access-w6c45\") pod \"nova-scheduler-0\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.319736 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.355774 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.355903 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-config-data\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.356054 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czlhp\" (UniqueName: \"kubernetes.io/projected/3d6e757d-b7e9-417b-a63e-94879c7f3f74-kube-api-access-czlhp\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.356090 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6e757d-b7e9-417b-a63e-94879c7f3f74-logs\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.356655 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3d6e757d-b7e9-417b-a63e-94879c7f3f74-logs\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.363265 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-config-data\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.368884 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.372267 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.385870 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.386269 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.394474 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czlhp\" (UniqueName: \"kubernetes.io/projected/3d6e757d-b7e9-417b-a63e-94879c7f3f74-kube-api-access-czlhp\") pod \"nova-metadata-0\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.466055 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69kdk\" (UniqueName: 
\"kubernetes.io/projected/73de5f3f-982c-4471-b91b-e3725da6be03-kube-api-access-69kdk\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.466452 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.466950 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.491866 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.519878 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.557419 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-cbrcp"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.559959 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570095 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570164 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-svc\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570196 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570350 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570383 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " 
pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570405 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl67m\" (UniqueName: \"kubernetes.io/projected/d8b08728-c946-43e4-85fa-0b033034bd26-kube-api-access-rl67m\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570459 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69kdk\" (UniqueName: \"kubernetes.io/projected/73de5f3f-982c-4471-b91b-e3725da6be03-kube-api-access-69kdk\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570494 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-config\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.570512 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.579652 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.580009 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.610300 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69kdk\" (UniqueName: \"kubernetes.io/projected/73de5f3f-982c-4471-b91b-e3725da6be03-kube-api-access-69kdk\") pod \"nova-cell1-novncproxy-0\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.616455 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-cbrcp"] Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.653545 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.672096 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-config\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.672161 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.672246 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-svc\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.672270 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.672417 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 
crc kubenswrapper[4836]: I0217 14:30:48.672446 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl67m\" (UniqueName: \"kubernetes.io/projected/d8b08728-c946-43e4-85fa-0b033034bd26-kube-api-access-rl67m\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.674442 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-svc\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.678127 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-config\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.680159 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.680386 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.680911 4836 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.702488 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl67m\" (UniqueName: \"kubernetes.io/projected/d8b08728-c946-43e4-85fa-0b033034bd26-kube-api-access-rl67m\") pod \"dnsmasq-dns-78cd565959-cbrcp\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.722933 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:30:48 crc kubenswrapper[4836]: I0217 14:30:48.900053 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:30:52 crc kubenswrapper[4836]: I0217 14:30:52.630039 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:30:52 crc kubenswrapper[4836]: I0217 14:30:52.660587 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.872678 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bz94v"] Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.875418 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.879931 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.880247 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.905368 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bz94v"] Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.914757 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-config-data\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.914893 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn56p\" (UniqueName: \"kubernetes.io/projected/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-kube-api-access-hn56p\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.914970 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-scripts\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:54 crc kubenswrapper[4836]: I0217 14:30:54.915080 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.017412 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn56p\" (UniqueName: \"kubernetes.io/projected/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-kube-api-access-hn56p\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.017540 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-scripts\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.017658 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.017868 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-config-data\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.025092 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.027237 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-config-data\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.028128 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-scripts\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.044436 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn56p\" (UniqueName: \"kubernetes.io/projected/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-kube-api-access-hn56p\") pod \"nova-cell1-conductor-db-sync-bz94v\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:55 crc kubenswrapper[4836]: I0217 14:30:55.214072 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:30:59 crc kubenswrapper[4836]: E0217 14:30:59.238808 4836 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 14:30:59 crc kubenswrapper[4836]: E0217 14:30:59.239862 4836 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftf9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-r5vl4_openshift-marketplace(5d52263a-9417-43b6-903c-79e41b1200a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 14:30:59 crc kubenswrapper[4836]: E0217 14:30:59.241119 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r5vl4" podUID="5d52263a-9417-43b6-903c-79e41b1200a0" Feb 17 14:30:59 crc kubenswrapper[4836]: E0217 14:30:59.741281 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r5vl4" podUID="5d52263a-9417-43b6-903c-79e41b1200a0" Feb 17 14:30:59 crc kubenswrapper[4836]: I0217 14:30:59.984038 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.143985 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-combined-ca-bundle\") pod \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.144086 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-sg-core-conf-yaml\") pod \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.144192 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-config-data\") pod \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.144367 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-run-httpd\") pod \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.144384 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-scripts\") pod \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.144455 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-log-httpd\") pod \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.144513 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7whp\" (UniqueName: \"kubernetes.io/projected/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-kube-api-access-l7whp\") pod \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\" (UID: \"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506\") " Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.177276 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" (UID: "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.177642 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" (UID: "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.182955 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-kube-api-access-l7whp" (OuterVolumeSpecName: "kube-api-access-l7whp") pod "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" (UID: "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506"). InnerVolumeSpecName "kube-api-access-l7whp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.204037 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-scripts" (OuterVolumeSpecName: "scripts") pod "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" (UID: "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.225490 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" (UID: "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.248702 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.248761 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.248777 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.248788 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7whp\" (UniqueName: \"kubernetes.io/projected/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-kube-api-access-l7whp\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:00 crc 
kubenswrapper[4836]: I0217 14:31:00.248801 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.297011 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" (UID: "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.352767 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.358407 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.381705 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-config-data" (OuterVolumeSpecName: "config-data") pod "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" (UID: "7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.455544 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.794942 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bz94v"] Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.809661 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.810004 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506","Type":"ContainerDied","Data":"f76c5ca2919dcba6a57d2c3c18620a0d364cbf08343d7aa01bc14bf17c1cc24c"} Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.810751 4836 scope.go:117] "RemoveContainer" containerID="0f482f5faddf89800b640dc5e78fd2ecafe8e9e7010c8aad8dada8307c95b71c" Feb 17 14:31:00 crc kubenswrapper[4836]: W0217 14:31:00.823432 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c442bd_1a4d_4e8f_b3b2_c2e6c97faeed.slice/crio-7520be2580517ac140f3d2f437db810e791805bf994d997d530a396b328fa465 WatchSource:0}: Error finding container 7520be2580517ac140f3d2f437db810e791805bf994d997d530a396b328fa465: Status 404 returned error can't find the container with id 7520be2580517ac140f3d2f437db810e791805bf994d997d530a396b328fa465 Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.837639 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d6e757d-b7e9-417b-a63e-94879c7f3f74","Type":"ContainerStarted","Data":"b6f643b62e3a190c9d6903fa464d7953ef4da0c0399e9c74ba601860f623c445"} Feb 17 14:31:00 crc 
kubenswrapper[4836]: I0217 14:31:00.852858 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.916776 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqvvn"] Feb 17 14:31:00 crc kubenswrapper[4836]: I0217 14:31:00.995804 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.004528 4836 scope.go:117] "RemoveContainer" containerID="c56beb0cbfb019f8f773e8f80f2e2999175246d8b61e325eddb4ad43a6b127d4" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.047397 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.257585 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:01 crc kubenswrapper[4836]: E0217 14:31:01.258286 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-notification-agent" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.258333 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-notification-agent" Feb 17 14:31:01 crc kubenswrapper[4836]: E0217 14:31:01.258362 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-central-agent" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.258369 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-central-agent" Feb 17 14:31:01 crc kubenswrapper[4836]: E0217 14:31:01.258409 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="sg-core" Feb 17 14:31:01 crc kubenswrapper[4836]: 
I0217 14:31:01.258415 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="sg-core" Feb 17 14:31:01 crc kubenswrapper[4836]: E0217 14:31:01.258436 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="proxy-httpd" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.258442 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="proxy-httpd" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.258695 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-central-agent" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.258712 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="proxy-httpd" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.258736 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="sg-core" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.258747 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="ceilometer-notification-agent" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.271160 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.273121 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.277798 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.278377 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.476882 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-scripts\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.479938 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.480235 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qclhz\" (UniqueName: \"kubernetes.io/projected/68b35111-581a-4e2e-9fae-3e0248674655-kube-api-access-qclhz\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.480679 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.480862 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-log-httpd\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.481134 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-config-data\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.481353 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-run-httpd\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.550724 4836 scope.go:117] "RemoveContainer" containerID="36e8022aac7122767c45efd486545780cddb57ef019acdaab3f1b9c40d6c965d" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.555132 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.573755 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.584937 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " 
pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.585017 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qclhz\" (UniqueName: \"kubernetes.io/projected/68b35111-581a-4e2e-9fae-3e0248674655-kube-api-access-qclhz\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.585107 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.585141 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-log-httpd\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.585164 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-config-data\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.585213 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-run-httpd\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.585233 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-scripts\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.590421 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-cbrcp"] Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.594905 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-scripts\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.595402 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-log-httpd\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.600387 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.600776 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-run-httpd\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.601168 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-config-data\") pod \"ceilometer-0\" (UID: 
\"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.619859 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.629780 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qclhz\" (UniqueName: \"kubernetes.io/projected/68b35111-581a-4e2e-9fae-3e0248674655-kube-api-access-qclhz\") pod \"ceilometer-0\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.734954 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.800381 4836 scope.go:117] "RemoveContainer" containerID="2a94678a06d4de49c0c0b68a141d38b36f2fd3139243ed587746cebb8d0a09d9" Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.866442 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bz94v" event={"ID":"790a788c-3cfe-49c8-b1ff-a83bcedf17e0","Type":"ContainerStarted","Data":"a2bfac90ff95a7b3434ec97c87c23bac5422abcf87cd41094de043f68aa74460"} Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.868874 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73de5f3f-982c-4471-b91b-e3725da6be03","Type":"ContainerStarted","Data":"73eb695666ad36f06c437207b1c5c555a1d75dfaa1f2995189afe99e230da0da"} Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.873700 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c429025c-a79e-425a-987a-773baaba5ef2","Type":"ContainerStarted","Data":"156a656ff70823725df87467a6303f3fbea750b4b0136e5d302f89802c6a5c93"} Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.878369 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed","Type":"ContainerStarted","Data":"7520be2580517ac140f3d2f437db810e791805bf994d997d530a396b328fa465"} Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.880888 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" event={"ID":"d8b08728-c946-43e4-85fa-0b033034bd26","Type":"ContainerStarted","Data":"4a4c76bc357a85c3013a688505b9be4f985a6e124e635443b51b48a2960c2a36"} Feb 17 14:31:01 crc kubenswrapper[4836]: I0217 14:31:01.883426 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqvvn" event={"ID":"3f9d6a93-3d3a-4c5c-85cf-329209cfe911","Type":"ContainerStarted","Data":"2372be5c0fdb7f3175c7e34e83cf6164deb631b4fbf708d784b860e58c9b0a29"} Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.473678 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:02 crc kubenswrapper[4836]: W0217 14:31:02.487606 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68b35111_581a_4e2e_9fae_3e0248674655.slice/crio-ae9ad801c19207ea76afd61356edd8b9b7b66ec00cfa25e818baaceb869c4ad6 WatchSource:0}: Error finding container ae9ad801c19207ea76afd61356edd8b9b7b66ec00cfa25e818baaceb869c4ad6: Status 404 returned error can't find the container with id ae9ad801c19207ea76afd61356edd8b9b7b66ec00cfa25e818baaceb869c4ad6 Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.589165 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" 
path="/var/lib/kubelet/pods/7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506/volumes" Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.904542 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerStarted","Data":"ae9ad801c19207ea76afd61356edd8b9b7b66ec00cfa25e818baaceb869c4ad6"} Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.909396 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bz94v" event={"ID":"790a788c-3cfe-49c8-b1ff-a83bcedf17e0","Type":"ContainerStarted","Data":"9a55578dc34e67ce0a93dbbd5c5e496ed951f38d462ffb4dcccf5ec23897e1c5"} Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.920322 4836 generic.go:334] "Generic (PLEG): container finished" podID="d8b08728-c946-43e4-85fa-0b033034bd26" containerID="10880f8e13f3f6efc6d19c175c05a63fc27f01501a301fd0a28b68afaa946ee2" exitCode=0 Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.920455 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" event={"ID":"d8b08728-c946-43e4-85fa-0b033034bd26","Type":"ContainerDied","Data":"10880f8e13f3f6efc6d19c175c05a63fc27f01501a301fd0a28b68afaa946ee2"} Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.930117 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqvvn" event={"ID":"3f9d6a93-3d3a-4c5c-85cf-329209cfe911","Type":"ContainerStarted","Data":"c224cbe49994301a8cf7d7e85623916f9815d0873ee461d723b64e1a3b753f8d"} Feb 17 14:31:02 crc kubenswrapper[4836]: I0217 14:31:02.945717 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bz94v" podStartSLOduration=8.945682756 podStartE2EDuration="8.945682756s" podCreationTimestamp="2026-02-17 14:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 14:31:02.938466091 +0000 UTC m=+1489.281394370" watchObservedRunningTime="2026-02-17 14:31:02.945682756 +0000 UTC m=+1489.288611045" Feb 17 14:31:03 crc kubenswrapper[4836]: I0217 14:31:03.008446 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lqvvn" podStartSLOduration=16.00828115 podStartE2EDuration="16.00828115s" podCreationTimestamp="2026-02-17 14:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:02.986051298 +0000 UTC m=+1489.328979567" watchObservedRunningTime="2026-02-17 14:31:03.00828115 +0000 UTC m=+1489.351209419" Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.022837 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerStarted","Data":"061b81d1777bbf667556b595b5cc63218a0966b7aa3e7690e90b4b84a2173bed"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.034611 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d6e757d-b7e9-417b-a63e-94879c7f3f74","Type":"ContainerStarted","Data":"7b113b343450d9b94ea6adbfe150b4f215416e1008c8bb123e85d6f538396d6e"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.034669 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d6e757d-b7e9-417b-a63e-94879c7f3f74","Type":"ContainerStarted","Data":"376eec79f26ee001240c9935be991e0037ae7235786eba961156abcccae1aa84"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.034829 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-log" containerID="cri-o://376eec79f26ee001240c9935be991e0037ae7235786eba961156abcccae1aa84" gracePeriod=30 Feb 17 
14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.035420 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-metadata" containerID="cri-o://7b113b343450d9b94ea6adbfe150b4f215416e1008c8bb123e85d6f538396d6e" gracePeriod=30 Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.046393 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="73de5f3f-982c-4471-b91b-e3725da6be03" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://62c716ef584f99d01d3c8b6a75eeb1c7c51deabb2ccc02590847f8e3c96e8ddb" gracePeriod=30 Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.046451 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73de5f3f-982c-4471-b91b-e3725da6be03","Type":"ContainerStarted","Data":"62c716ef584f99d01d3c8b6a75eeb1c7c51deabb2ccc02590847f8e3c96e8ddb"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.059982 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c429025c-a79e-425a-987a-773baaba5ef2","Type":"ContainerStarted","Data":"629650e8b16a63777bc8cae654dc487e384a65773313b5678caf2189bce80c69"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.060083 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c429025c-a79e-425a-987a-773baaba5ef2","Type":"ContainerStarted","Data":"714f99461e2735815fecf65f89d7affa2f50ecfc1ef8586cdf0229b12c705175"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.067821 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed","Type":"ContainerStarted","Data":"bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.069803 4836 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=14.583836704 podStartE2EDuration="20.0697675s" podCreationTimestamp="2026-02-17 14:30:47 +0000 UTC" firstStartedPulling="2026-02-17 14:31:00.355879851 +0000 UTC m=+1486.698808120" lastFinishedPulling="2026-02-17 14:31:05.841810647 +0000 UTC m=+1492.184738916" observedRunningTime="2026-02-17 14:31:07.06238526 +0000 UTC m=+1493.405313529" watchObservedRunningTime="2026-02-17 14:31:07.0697675 +0000 UTC m=+1493.412695769" Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.088005 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" event={"ID":"d8b08728-c946-43e4-85fa-0b033034bd26","Type":"ContainerStarted","Data":"6c739c83cd6c60eccf82cdc83958244ae182a579d8b46273f0f3fb2234b691ec"} Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.089745 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.097211 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=15.883125737 podStartE2EDuration="20.097172494s" podCreationTimestamp="2026-02-17 14:30:47 +0000 UTC" firstStartedPulling="2026-02-17 14:31:01.644706779 +0000 UTC m=+1487.987635048" lastFinishedPulling="2026-02-17 14:31:05.858753536 +0000 UTC m=+1492.201681805" observedRunningTime="2026-02-17 14:31:07.090057091 +0000 UTC m=+1493.432985380" watchObservedRunningTime="2026-02-17 14:31:07.097172494 +0000 UTC m=+1493.440100763" Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.116262 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=14.872917693 podStartE2EDuration="19.116232531s" podCreationTimestamp="2026-02-17 14:30:48 +0000 UTC" firstStartedPulling="2026-02-17 14:31:01.598497029 +0000 
UTC m=+1487.941425298" lastFinishedPulling="2026-02-17 14:31:05.841811857 +0000 UTC m=+1492.184740136" observedRunningTime="2026-02-17 14:31:07.114476193 +0000 UTC m=+1493.457404482" watchObservedRunningTime="2026-02-17 14:31:07.116232531 +0000 UTC m=+1493.459160820" Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.156148 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" podStartSLOduration=19.156108682 podStartE2EDuration="19.156108682s" podCreationTimestamp="2026-02-17 14:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:07.145112513 +0000 UTC m=+1493.488040802" watchObservedRunningTime="2026-02-17 14:31:07.156108682 +0000 UTC m=+1493.499036951" Feb 17 14:31:07 crc kubenswrapper[4836]: I0217 14:31:07.183688 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=15.200295449 podStartE2EDuration="20.183654309s" podCreationTimestamp="2026-02-17 14:30:47 +0000 UTC" firstStartedPulling="2026-02-17 14:31:00.866970678 +0000 UTC m=+1487.209898947" lastFinishedPulling="2026-02-17 14:31:05.850329528 +0000 UTC m=+1492.193257807" observedRunningTime="2026-02-17 14:31:07.169095564 +0000 UTC m=+1493.512023843" watchObservedRunningTime="2026-02-17 14:31:07.183654309 +0000 UTC m=+1493.526582568" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.113597 4836 generic.go:334] "Generic (PLEG): container finished" podID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerID="376eec79f26ee001240c9935be991e0037ae7235786eba961156abcccae1aa84" exitCode=143 Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.113825 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3d6e757d-b7e9-417b-a63e-94879c7f3f74","Type":"ContainerDied","Data":"376eec79f26ee001240c9935be991e0037ae7235786eba961156abcccae1aa84"} Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.121745 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerStarted","Data":"f38b096e0cb174cc64328d45e0488ca7d9888e5f9a99f36292e400cf1d4fd13d"} Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.321632 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.321741 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.521149 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.521238 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.559125 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.654541 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.654636 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 14:31:08 crc kubenswrapper[4836]: I0217 14:31:08.724163 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:09 crc kubenswrapper[4836]: I0217 14:31:09.141327 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerStarted","Data":"b296d87fd7976e77f9e6129f3c2eff1cfb26664d576e803cadb9560994f76f7d"} Feb 17 14:31:09 crc kubenswrapper[4836]: I0217 14:31:09.187972 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 14:31:09 crc kubenswrapper[4836]: I0217 14:31:09.403663 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:31:09 crc kubenswrapper[4836]: I0217 14:31:09.404154 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:31:10 crc kubenswrapper[4836]: I0217 14:31:10.157056 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerStarted","Data":"bd01937315b50abe24d9b50b0b169dd1610243cceeeeb3bd7fb5017e3522ad76"} Feb 17 14:31:10 crc kubenswrapper[4836]: I0217 14:31:10.213558 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.425666968 podStartE2EDuration="10.213531733s" podCreationTimestamp="2026-02-17 14:31:00 +0000 UTC" firstStartedPulling="2026-02-17 14:31:02.504714586 +0000 UTC m=+1488.847642865" lastFinishedPulling="2026-02-17 14:31:09.292579361 +0000 UTC m=+1495.635507630" observedRunningTime="2026-02-17 14:31:10.209606746 +0000 UTC m=+1496.552535025" watchObservedRunningTime="2026-02-17 14:31:10.213531733 +0000 UTC m=+1496.556460002" Feb 17 14:31:11 crc kubenswrapper[4836]: 
I0217 14:31:11.177831 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 14:31:13 crc kubenswrapper[4836]: I0217 14:31:13.202188 4836 generic.go:334] "Generic (PLEG): container finished" podID="790a788c-3cfe-49c8-b1ff-a83bcedf17e0" containerID="9a55578dc34e67ce0a93dbbd5c5e496ed951f38d462ffb4dcccf5ec23897e1c5" exitCode=0 Feb 17 14:31:13 crc kubenswrapper[4836]: I0217 14:31:13.202698 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bz94v" event={"ID":"790a788c-3cfe-49c8-b1ff-a83bcedf17e0","Type":"ContainerDied","Data":"9a55578dc34e67ce0a93dbbd5c5e496ed951f38d462ffb4dcccf5ec23897e1c5"} Feb 17 14:31:13 crc kubenswrapper[4836]: I0217 14:31:13.902600 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:31:13 crc kubenswrapper[4836]: I0217 14:31:13.966754 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-cjz8m"] Feb 17 14:31:13 crc kubenswrapper[4836]: I0217 14:31:13.975179 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerName="dnsmasq-dns" containerID="cri-o://407f5678203e5e174c01300835b55b61252a1ab248014426970911ab531d756b" gracePeriod=10 Feb 17 14:31:14 crc kubenswrapper[4836]: I0217 14:31:14.220455 4836 generic.go:334] "Generic (PLEG): container finished" podID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerID="407f5678203e5e174c01300835b55b61252a1ab248014426970911ab531d756b" exitCode=0 Feb 17 14:31:14 crc kubenswrapper[4836]: I0217 14:31:14.220579 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" event={"ID":"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d","Type":"ContainerDied","Data":"407f5678203e5e174c01300835b55b61252a1ab248014426970911ab531d756b"} Feb 17 14:31:14 crc 
kubenswrapper[4836]: I0217 14:31:14.224914 4836 generic.go:334] "Generic (PLEG): container finished" podID="3f9d6a93-3d3a-4c5c-85cf-329209cfe911" containerID="c224cbe49994301a8cf7d7e85623916f9815d0873ee461d723b64e1a3b753f8d" exitCode=0 Feb 17 14:31:14 crc kubenswrapper[4836]: I0217 14:31:14.225183 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqvvn" event={"ID":"3f9d6a93-3d3a-4c5c-85cf-329209cfe911","Type":"ContainerDied","Data":"c224cbe49994301a8cf7d7e85623916f9815d0873ee461d723b64e1a3b753f8d"} Feb 17 14:31:14 crc kubenswrapper[4836]: I0217 14:31:14.428456 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: connect: connection refused" Feb 17 14:31:14 crc kubenswrapper[4836]: E0217 14:31:14.799652 4836 info.go:109] Failed to get network devices: open /sys/class/net/a2bfac90ff95a7b/address: no such file or directory Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.075882 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.215607 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn56p\" (UniqueName: \"kubernetes.io/projected/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-kube-api-access-hn56p\") pod \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.215794 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-config-data\") pod \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.215860 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-combined-ca-bundle\") pod \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.215967 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-scripts\") pod \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\" (UID: \"790a788c-3cfe-49c8-b1ff-a83bcedf17e0\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.223487 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-scripts" (OuterVolumeSpecName: "scripts") pod "790a788c-3cfe-49c8-b1ff-a83bcedf17e0" (UID: "790a788c-3cfe-49c8-b1ff-a83bcedf17e0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.224036 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-kube-api-access-hn56p" (OuterVolumeSpecName: "kube-api-access-hn56p") pod "790a788c-3cfe-49c8-b1ff-a83bcedf17e0" (UID: "790a788c-3cfe-49c8-b1ff-a83bcedf17e0"). InnerVolumeSpecName "kube-api-access-hn56p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.244625 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" event={"ID":"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d","Type":"ContainerDied","Data":"3d5f1259a1d6811a1bf928961a17e403bc60cdb65dc5d67063f562d7b7e44223"} Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.244685 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d5f1259a1d6811a1bf928961a17e403bc60cdb65dc5d67063f562d7b7e44223" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.253560 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bz94v" event={"ID":"790a788c-3cfe-49c8-b1ff-a83bcedf17e0","Type":"ContainerDied","Data":"a2bfac90ff95a7b3434ec97c87c23bac5422abcf87cd41094de043f68aa74460"} Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.253645 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2bfac90ff95a7b3434ec97c87c23bac5422abcf87cd41094de043f68aa74460" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.255475 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bz94v" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.264346 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-config-data" (OuterVolumeSpecName: "config-data") pod "790a788c-3cfe-49c8-b1ff-a83bcedf17e0" (UID: "790a788c-3cfe-49c8-b1ff-a83bcedf17e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.279757 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "790a788c-3cfe-49c8-b1ff-a83bcedf17e0" (UID: "790a788c-3cfe-49c8-b1ff-a83bcedf17e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.326052 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.326091 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.326105 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.326115 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn56p\" (UniqueName: \"kubernetes.io/projected/790a788c-3cfe-49c8-b1ff-a83bcedf17e0-kube-api-access-hn56p\") on node \"crc\" 
DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.351374 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 14:31:15 crc kubenswrapper[4836]: E0217 14:31:15.352059 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790a788c-3cfe-49c8-b1ff-a83bcedf17e0" containerName="nova-cell1-conductor-db-sync" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.352080 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="790a788c-3cfe-49c8-b1ff-a83bcedf17e0" containerName="nova-cell1-conductor-db-sync" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.352326 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="790a788c-3cfe-49c8-b1ff-a83bcedf17e0" containerName="nova-cell1-conductor-db-sync" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.353253 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.383989 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.409222 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.435554 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed905f2c-85b9-4684-a376-674caf693eca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.435687 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed905f2c-85b9-4684-a376-674caf693eca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.435890 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/ed905f2c-85b9-4684-a376-674caf693eca-kube-api-access-5q92t\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.537282 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tzl2\" (UniqueName: \"kubernetes.io/projected/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-kube-api-access-9tzl2\") pod \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.537397 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-svc\") pod \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " Feb 17 14:31:15 crc 
kubenswrapper[4836]: I0217 14:31:15.537423 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-sb\") pod \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.537544 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-config\") pod \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.537576 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-nb\") pod \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.537771 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-swift-storage-0\") pod \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\" (UID: \"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.538184 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed905f2c-85b9-4684-a376-674caf693eca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.538334 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q92t\" (UniqueName: 
\"kubernetes.io/projected/ed905f2c-85b9-4684-a376-674caf693eca-kube-api-access-5q92t\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.538409 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed905f2c-85b9-4684-a376-674caf693eca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.547549 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed905f2c-85b9-4684-a376-674caf693eca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.550922 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed905f2c-85b9-4684-a376-674caf693eca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.573274 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/ed905f2c-85b9-4684-a376-674caf693eca-kube-api-access-5q92t\") pod \"nova-cell1-conductor-0\" (UID: \"ed905f2c-85b9-4684-a376-674caf693eca\") " pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.601499 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-kube-api-access-9tzl2" (OuterVolumeSpecName: "kube-api-access-9tzl2") pod 
"2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" (UID: "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d"). InnerVolumeSpecName "kube-api-access-9tzl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.629942 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" (UID: "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.637508 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" (UID: "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.641490 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-config" (OuterVolumeSpecName: "config") pod "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" (UID: "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.641643 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tzl2\" (UniqueName: \"kubernetes.io/projected/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-kube-api-access-9tzl2\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.641691 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.641700 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.645469 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" (UID: "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.668567 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" (UID: "2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.742983 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.744375 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.744410 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.744424 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.809649 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.966068 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-config-data\") pod \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.966245 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b2gc\" (UniqueName: \"kubernetes.io/projected/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-kube-api-access-5b2gc\") pod \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.966339 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-combined-ca-bundle\") pod \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.966437 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-scripts\") pod \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\" (UID: \"3f9d6a93-3d3a-4c5c-85cf-329209cfe911\") " Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.974544 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-kube-api-access-5b2gc" (OuterVolumeSpecName: "kube-api-access-5b2gc") pod "3f9d6a93-3d3a-4c5c-85cf-329209cfe911" (UID: "3f9d6a93-3d3a-4c5c-85cf-329209cfe911"). InnerVolumeSpecName "kube-api-access-5b2gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:15 crc kubenswrapper[4836]: I0217 14:31:15.981597 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-scripts" (OuterVolumeSpecName: "scripts") pod "3f9d6a93-3d3a-4c5c-85cf-329209cfe911" (UID: "3f9d6a93-3d3a-4c5c-85cf-329209cfe911"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.008257 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-config-data" (OuterVolumeSpecName: "config-data") pod "3f9d6a93-3d3a-4c5c-85cf-329209cfe911" (UID: "3f9d6a93-3d3a-4c5c-85cf-329209cfe911"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.016756 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f9d6a93-3d3a-4c5c-85cf-329209cfe911" (UID: "3f9d6a93-3d3a-4c5c-85cf-329209cfe911"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.069736 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.069780 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.069791 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b2gc\" (UniqueName: \"kubernetes.io/projected/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-kube-api-access-5b2gc\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.069804 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f9d6a93-3d3a-4c5c-85cf-329209cfe911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.264086 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 14:31:16 crc kubenswrapper[4836]: W0217 14:31:16.276344 4836 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded905f2c_85b9_4684_a376_674caf693eca.slice/crio-f45852e9108cfa4373def7cb43822ff868e765b991183997db290af1163c4743 WatchSource:0}: Error finding container f45852e9108cfa4373def7cb43822ff868e765b991183997db290af1163c4743: Status 404 returned error can't find the container with id f45852e9108cfa4373def7cb43822ff868e765b991183997db290af1163c4743 Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.278868 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5vl4" event={"ID":"5d52263a-9417-43b6-903c-79e41b1200a0","Type":"ContainerStarted","Data":"d9cc1391f260161a1515210b2ec3643c9f8903ea38174d3e1a0a920c643cd1c4"} Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.289601 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-cjz8m" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.289646 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqvvn" event={"ID":"3f9d6a93-3d3a-4c5c-85cf-329209cfe911","Type":"ContainerDied","Data":"2372be5c0fdb7f3175c7e34e83cf6164deb631b4fbf708d784b860e58c9b0a29"} Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.289745 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2372be5c0fdb7f3175c7e34e83cf6164deb631b4fbf708d784b860e58c9b0a29" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.289685 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqvvn" Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.462327 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.462749 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-log" containerID="cri-o://714f99461e2735815fecf65f89d7affa2f50ecfc1ef8586cdf0229b12c705175" gracePeriod=30 Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.462819 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-api" containerID="cri-o://629650e8b16a63777bc8cae654dc487e384a65773313b5678caf2189bce80c69" gracePeriod=30 Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.480742 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.482142 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" containerName="nova-scheduler-scheduler" containerID="cri-o://bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" gracePeriod=30 Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.644428 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-cjz8m"] Feb 17 14:31:16 crc kubenswrapper[4836]: I0217 14:31:16.669243 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-cjz8m"] Feb 17 14:31:17 crc kubenswrapper[4836]: I0217 14:31:17.304131 4836 generic.go:334] "Generic (PLEG): container finished" podID="c429025c-a79e-425a-987a-773baaba5ef2" containerID="714f99461e2735815fecf65f89d7affa2f50ecfc1ef8586cdf0229b12c705175" exitCode=143 
Feb 17 14:31:17 crc kubenswrapper[4836]: I0217 14:31:17.304238 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c429025c-a79e-425a-987a-773baaba5ef2","Type":"ContainerDied","Data":"714f99461e2735815fecf65f89d7affa2f50ecfc1ef8586cdf0229b12c705175"} Feb 17 14:31:17 crc kubenswrapper[4836]: I0217 14:31:17.306152 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ed905f2c-85b9-4684-a376-674caf693eca","Type":"ContainerStarted","Data":"8db8d4f89aaf6691469ef88d0e4bf139a6685bf389a2665674af697b0e174704"} Feb 17 14:31:17 crc kubenswrapper[4836]: I0217 14:31:17.306182 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ed905f2c-85b9-4684-a376-674caf693eca","Type":"ContainerStarted","Data":"f45852e9108cfa4373def7cb43822ff868e765b991183997db290af1163c4743"} Feb 17 14:31:17 crc kubenswrapper[4836]: I0217 14:31:17.308981 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:17 crc kubenswrapper[4836]: I0217 14:31:17.334246 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.334220327 podStartE2EDuration="2.334220327s" podCreationTimestamp="2026-02-17 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:17.325890552 +0000 UTC m=+1503.668818821" watchObservedRunningTime="2026-02-17 14:31:17.334220327 +0000 UTC m=+1503.677148596" Feb 17 14:31:18 crc kubenswrapper[4836]: I0217 14:31:18.321088 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 14:31:18 crc kubenswrapper[4836]: I0217 14:31:18.321146 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 14:31:18 crc 
kubenswrapper[4836]: E0217 14:31:18.530035 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 14:31:18 crc kubenswrapper[4836]: E0217 14:31:18.537743 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 14:31:18 crc kubenswrapper[4836]: E0217 14:31:18.539847 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 14:31:18 crc kubenswrapper[4836]: E0217 14:31:18.540018 4836 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" containerName="nova-scheduler-scheduler" Feb 17 14:31:18 crc kubenswrapper[4836]: I0217 14:31:18.583141 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" path="/var/lib/kubelet/pods/2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d/volumes" Feb 17 14:31:19 crc kubenswrapper[4836]: I0217 14:31:19.333974 4836 generic.go:334] "Generic (PLEG): container finished" podID="5d52263a-9417-43b6-903c-79e41b1200a0" 
containerID="d9cc1391f260161a1515210b2ec3643c9f8903ea38174d3e1a0a920c643cd1c4" exitCode=0 Feb 17 14:31:19 crc kubenswrapper[4836]: I0217 14:31:19.334099 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5vl4" event={"ID":"5d52263a-9417-43b6-903c-79e41b1200a0","Type":"ContainerDied","Data":"d9cc1391f260161a1515210b2ec3643c9f8903ea38174d3e1a0a920c643cd1c4"} Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.350626 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r5vl4" event={"ID":"5d52263a-9417-43b6-903c-79e41b1200a0","Type":"ContainerStarted","Data":"981c2da3bead75035e25520c28ec32c6fc29417547882d5a18e37555cbfe229b"} Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.353432 4836 generic.go:334] "Generic (PLEG): container finished" podID="c429025c-a79e-425a-987a-773baaba5ef2" containerID="629650e8b16a63777bc8cae654dc487e384a65773313b5678caf2189bce80c69" exitCode=0 Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.353473 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c429025c-a79e-425a-987a-773baaba5ef2","Type":"ContainerDied","Data":"629650e8b16a63777bc8cae654dc487e384a65773313b5678caf2189bce80c69"} Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.384600 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r5vl4" podStartSLOduration=2.846713621 podStartE2EDuration="45.384572156s" podCreationTimestamp="2026-02-17 14:30:35 +0000 UTC" firstStartedPulling="2026-02-17 14:30:37.329182088 +0000 UTC m=+1463.672110357" lastFinishedPulling="2026-02-17 14:31:19.867040623 +0000 UTC m=+1506.209968892" observedRunningTime="2026-02-17 14:31:20.378123171 +0000 UTC m=+1506.721051440" watchObservedRunningTime="2026-02-17 14:31:20.384572156 +0000 UTC m=+1506.727500425" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.671529 4836 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.809439 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-config-data\") pod \"c429025c-a79e-425a-987a-773baaba5ef2\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.809582 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-combined-ca-bundle\") pod \"c429025c-a79e-425a-987a-773baaba5ef2\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.809724 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n528b\" (UniqueName: \"kubernetes.io/projected/c429025c-a79e-425a-987a-773baaba5ef2-kube-api-access-n528b\") pod \"c429025c-a79e-425a-987a-773baaba5ef2\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.809891 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c429025c-a79e-425a-987a-773baaba5ef2-logs\") pod \"c429025c-a79e-425a-987a-773baaba5ef2\" (UID: \"c429025c-a79e-425a-987a-773baaba5ef2\") " Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.818837 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c429025c-a79e-425a-987a-773baaba5ef2-logs" (OuterVolumeSpecName: "logs") pod "c429025c-a79e-425a-987a-773baaba5ef2" (UID: "c429025c-a79e-425a-987a-773baaba5ef2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.832500 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c429025c-a79e-425a-987a-773baaba5ef2-kube-api-access-n528b" (OuterVolumeSpecName: "kube-api-access-n528b") pod "c429025c-a79e-425a-987a-773baaba5ef2" (UID: "c429025c-a79e-425a-987a-773baaba5ef2"). InnerVolumeSpecName "kube-api-access-n528b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.862983 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-config-data" (OuterVolumeSpecName: "config-data") pod "c429025c-a79e-425a-987a-773baaba5ef2" (UID: "c429025c-a79e-425a-987a-773baaba5ef2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.904745 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c429025c-a79e-425a-987a-773baaba5ef2" (UID: "c429025c-a79e-425a-987a-773baaba5ef2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.925129 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.925215 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c429025c-a79e-425a-987a-773baaba5ef2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.925242 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n528b\" (UniqueName: \"kubernetes.io/projected/c429025c-a79e-425a-987a-773baaba5ef2-kube-api-access-n528b\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.925272 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c429025c-a79e-425a-987a-773baaba5ef2-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:20 crc kubenswrapper[4836]: E0217 14:31:20.954090 4836 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c442bd_1a4d_4e8f_b3b2_c2e6c97faeed.slice/crio-bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c442bd_1a4d_4e8f_b3b2_c2e6c97faeed.slice/crio-conmon-bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef.scope\": RecentStats: unable to find data in memory cache]" Feb 17 14:31:20 crc kubenswrapper[4836]: I0217 14:31:20.963340 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.128773 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-combined-ca-bundle\") pod \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.129071 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-config-data\") pod \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.129254 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6c45\" (UniqueName: \"kubernetes.io/projected/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-kube-api-access-w6c45\") pod \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\" (UID: \"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed\") " Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.151476 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-kube-api-access-w6c45" (OuterVolumeSpecName: "kube-api-access-w6c45") pod "50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" (UID: "50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed"). InnerVolumeSpecName "kube-api-access-w6c45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.171267 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" (UID: "50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.180764 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-config-data" (OuterVolumeSpecName: "config-data") pod "50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" (UID: "50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.232450 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.232496 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.232509 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6c45\" (UniqueName: \"kubernetes.io/projected/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed-kube-api-access-w6c45\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.369602 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c429025c-a79e-425a-987a-773baaba5ef2","Type":"ContainerDied","Data":"156a656ff70823725df87467a6303f3fbea750b4b0136e5d302f89802c6a5c93"} Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.369646 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.369669 4836 scope.go:117] "RemoveContainer" containerID="629650e8b16a63777bc8cae654dc487e384a65773313b5678caf2189bce80c69" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.372834 4836 generic.go:334] "Generic (PLEG): container finished" podID="50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" containerID="bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" exitCode=0 Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.372873 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed","Type":"ContainerDied","Data":"bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef"} Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.372894 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed","Type":"ContainerDied","Data":"7520be2580517ac140f3d2f437db810e791805bf994d997d530a396b328fa465"} Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.372939 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.418685 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.434982 4836 scope.go:117] "RemoveContainer" containerID="714f99461e2735815fecf65f89d7affa2f50ecfc1ef8586cdf0229b12c705175" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.446176 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.464451 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.482478 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.499342 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: E0217 14:31:21.500134 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerName="init" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500157 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerName="init" Feb 17 14:31:21 crc kubenswrapper[4836]: E0217 14:31:21.500172 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" containerName="nova-scheduler-scheduler" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500179 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" containerName="nova-scheduler-scheduler" Feb 17 14:31:21 crc kubenswrapper[4836]: E0217 14:31:21.500195 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9d6a93-3d3a-4c5c-85cf-329209cfe911" containerName="nova-manage" Feb 17 14:31:21 crc 
kubenswrapper[4836]: I0217 14:31:21.500203 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9d6a93-3d3a-4c5c-85cf-329209cfe911" containerName="nova-manage" Feb 17 14:31:21 crc kubenswrapper[4836]: E0217 14:31:21.500235 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-api" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500246 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-api" Feb 17 14:31:21 crc kubenswrapper[4836]: E0217 14:31:21.500266 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerName="dnsmasq-dns" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500274 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerName="dnsmasq-dns" Feb 17 14:31:21 crc kubenswrapper[4836]: E0217 14:31:21.500305 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-log" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500312 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-log" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500568 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" containerName="nova-scheduler-scheduler" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500591 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-log" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500607 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb84cd5-666c-4cf2-94df-5b51d0e3fa6d" containerName="dnsmasq-dns" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 
14:31:21.500628 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9d6a93-3d3a-4c5c-85cf-329209cfe911" containerName="nova-manage" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.500646 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c429025c-a79e-425a-987a-773baaba5ef2" containerName="nova-api-api" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.501686 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.502896 4836 scope.go:117] "RemoveContainer" containerID="bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.504574 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.519602 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.534789 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.537919 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.545259 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.551624 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.569165 4836 scope.go:117] "RemoveContainer" containerID="bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" Feb 17 14:31:21 crc kubenswrapper[4836]: E0217 14:31:21.569934 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef\": container with ID starting with bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef not found: ID does not exist" containerID="bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.570009 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef"} err="failed to get container status \"bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef\": rpc error: code = NotFound desc = could not find container \"bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef\": container with ID starting with bcf6221fea7b134ba44f91202f344506361b2f3cd1e454aa804b4124188e38ef not found: ID does not exist" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.644757 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbrvs\" (UniqueName: \"kubernetes.io/projected/1853ac32-f733-4d5f-9cc2-edf83a927b28-kube-api-access-lbrvs\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" 
Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.644819 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.645175 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.645618 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-config-data\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.645666 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvc2r\" (UniqueName: \"kubernetes.io/projected/cf284e7d-7c68-4688-9e14-87e9c32f6c41-kube-api-access-rvc2r\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.646016 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-config-data\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.646267 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf284e7d-7c68-4688-9e14-87e9c32f6c41-logs\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.748884 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-config-data\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.748941 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvc2r\" (UniqueName: \"kubernetes.io/projected/cf284e7d-7c68-4688-9e14-87e9c32f6c41-kube-api-access-rvc2r\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.749016 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-config-data\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.750334 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf284e7d-7c68-4688-9e14-87e9c32f6c41-logs\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.750783 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf284e7d-7c68-4688-9e14-87e9c32f6c41-logs\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 
crc kubenswrapper[4836]: I0217 14:31:21.751009 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbrvs\" (UniqueName: \"kubernetes.io/projected/1853ac32-f733-4d5f-9cc2-edf83a927b28-kube-api-access-lbrvs\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.751637 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.752170 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.753848 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-config-data\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.758626 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.758720 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.760660 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-config-data\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.776531 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvc2r\" (UniqueName: \"kubernetes.io/projected/cf284e7d-7c68-4688-9e14-87e9c32f6c41-kube-api-access-rvc2r\") pod \"nova-api-0\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " pod="openstack/nova-api-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.778078 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbrvs\" (UniqueName: \"kubernetes.io/projected/1853ac32-f733-4d5f-9cc2-edf83a927b28-kube-api-access-lbrvs\") pod \"nova-scheduler-0\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.822031 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:31:21 crc kubenswrapper[4836]: I0217 14:31:21.884173 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:31:22 crc kubenswrapper[4836]: I0217 14:31:22.413125 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:31:22 crc kubenswrapper[4836]: W0217 14:31:22.579350 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf284e7d_7c68_4688_9e14_87e9c32f6c41.slice/crio-be3d92134965e9daf67db43dfe412847ed9ee954252aa5228b3fe8a1f26a6f7a WatchSource:0}: Error finding container be3d92134965e9daf67db43dfe412847ed9ee954252aa5228b3fe8a1f26a6f7a: Status 404 returned error can't find the container with id be3d92134965e9daf67db43dfe412847ed9ee954252aa5228b3fe8a1f26a6f7a Feb 17 14:31:22 crc kubenswrapper[4836]: I0217 14:31:22.589180 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed" path="/var/lib/kubelet/pods/50c442bd-1a4d-4e8f-b3b2-c2e6c97faeed/volumes" Feb 17 14:31:22 crc kubenswrapper[4836]: I0217 14:31:22.590046 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c429025c-a79e-425a-987a-773baaba5ef2" path="/var/lib/kubelet/pods/c429025c-a79e-425a-987a-773baaba5ef2/volumes" Feb 17 14:31:22 crc kubenswrapper[4836]: I0217 14:31:22.590784 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:23 crc kubenswrapper[4836]: I0217 14:31:23.400007 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf284e7d-7c68-4688-9e14-87e9c32f6c41","Type":"ContainerStarted","Data":"6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047"} Feb 17 14:31:23 crc kubenswrapper[4836]: I0217 14:31:23.400373 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf284e7d-7c68-4688-9e14-87e9c32f6c41","Type":"ContainerStarted","Data":"d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e"} Feb 17 
14:31:23 crc kubenswrapper[4836]: I0217 14:31:23.400387 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf284e7d-7c68-4688-9e14-87e9c32f6c41","Type":"ContainerStarted","Data":"be3d92134965e9daf67db43dfe412847ed9ee954252aa5228b3fe8a1f26a6f7a"} Feb 17 14:31:23 crc kubenswrapper[4836]: I0217 14:31:23.405733 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1853ac32-f733-4d5f-9cc2-edf83a927b28","Type":"ContainerStarted","Data":"cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1"} Feb 17 14:31:23 crc kubenswrapper[4836]: I0217 14:31:23.405791 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1853ac32-f733-4d5f-9cc2-edf83a927b28","Type":"ContainerStarted","Data":"942beb1e7c7f15eec98870c7e5614fc6da7b9d580327cb6b8b021c40fa96a882"} Feb 17 14:31:23 crc kubenswrapper[4836]: I0217 14:31:23.433218 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.433185308 podStartE2EDuration="2.433185308s" podCreationTimestamp="2026-02-17 14:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:23.424994186 +0000 UTC m=+1509.767922455" watchObservedRunningTime="2026-02-17 14:31:23.433185308 +0000 UTC m=+1509.776113587" Feb 17 14:31:23 crc kubenswrapper[4836]: I0217 14:31:23.450979 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.45094841 podStartE2EDuration="2.45094841s" podCreationTimestamp="2026-02-17 14:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:23.445956785 +0000 UTC m=+1509.788885074" watchObservedRunningTime="2026-02-17 14:31:23.45094841 +0000 UTC 
m=+1509.793876689" Feb 17 14:31:25 crc kubenswrapper[4836]: I0217 14:31:25.775279 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 14:31:26 crc kubenswrapper[4836]: I0217 14:31:26.135422 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:31:26 crc kubenswrapper[4836]: I0217 14:31:26.135498 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:31:26 crc kubenswrapper[4836]: I0217 14:31:26.823046 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 14:31:27 crc kubenswrapper[4836]: I0217 14:31:27.193392 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r5vl4" podUID="5d52263a-9417-43b6-903c-79e41b1200a0" containerName="registry-server" probeResult="failure" output=< Feb 17 14:31:27 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:31:27 crc kubenswrapper[4836]: > Feb 17 14:31:29 crc kubenswrapper[4836]: I0217 14:31:29.604399 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7977b0a7-fd9c-4d3c-bc21-fbf9d0e70506" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.210:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:31:31 crc kubenswrapper[4836]: I0217 14:31:31.773009 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 14:31:31 crc kubenswrapper[4836]: I0217 14:31:31.823014 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 14:31:31 crc kubenswrapper[4836]: I0217 14:31:31.858137 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Feb 17 14:31:31 crc kubenswrapper[4836]: I0217 14:31:31.885714 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:31:31 crc kubenswrapper[4836]: I0217 14:31:31.885779 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:31:32 crc kubenswrapper[4836]: I0217 14:31:32.565364 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 14:31:32 crc kubenswrapper[4836]: I0217 14:31:32.967547 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:31:32 crc kubenswrapper[4836]: I0217 14:31:32.967543 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 14:31:35 crc kubenswrapper[4836]: I0217 14:31:35.897712 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:31:35 crc kubenswrapper[4836]: I0217 14:31:35.898238 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="87197028-3222-4c04-89a7-135997258e0d" containerName="kube-state-metrics" containerID="cri-o://6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9" gracePeriod=30 Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.213556 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:31:36 crc kubenswrapper[4836]: 
I0217 14:31:36.308219 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r5vl4" Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.399420 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r5vl4"] Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.475795 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-89b2r"] Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.476108 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-89b2r" podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="registry-server" containerID="cri-o://b84dd65de54881081222d1401d684becd3ab6f396a5d3ddb1a10e413f4f858e0" gracePeriod=2 Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.588959 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.597537 4836 generic.go:334] "Generic (PLEG): container finished" podID="87197028-3222-4c04-89a7-135997258e0d" containerID="6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9" exitCode=2 Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.598053 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.603532 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"87197028-3222-4c04-89a7-135997258e0d","Type":"ContainerDied","Data":"6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9"} Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.603625 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"87197028-3222-4c04-89a7-135997258e0d","Type":"ContainerDied","Data":"ac1cfd9dcf6c1abc6e025d9d148f792c18f57e036146c98f80e4d81f4745553b"} Feb 17 14:31:36 crc kubenswrapper[4836]: I0217 14:31:36.603647 4836 scope.go:117] "RemoveContainer" containerID="6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.004663 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnv8r\" (UniqueName: \"kubernetes.io/projected/87197028-3222-4c04-89a7-135997258e0d-kube-api-access-wnv8r\") pod \"87197028-3222-4c04-89a7-135997258e0d\" (UID: \"87197028-3222-4c04-89a7-135997258e0d\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.032280 4836 scope.go:117] "RemoveContainer" containerID="6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9" Feb 17 14:31:37 crc kubenswrapper[4836]: E0217 14:31:37.036810 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9\": container with ID starting with 6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9 not found: ID does not exist" containerID="6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.036887 4836 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9"} err="failed to get container status \"6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9\": rpc error: code = NotFound desc = could not find container \"6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9\": container with ID starting with 6d20de79ec5b2889e85dc2a14e5a092567cc4ce841befd2fa670b857d176cad9 not found: ID does not exist" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.083631 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87197028-3222-4c04-89a7-135997258e0d-kube-api-access-wnv8r" (OuterVolumeSpecName: "kube-api-access-wnv8r") pod "87197028-3222-4c04-89a7-135997258e0d" (UID: "87197028-3222-4c04-89a7-135997258e0d"). InnerVolumeSpecName "kube-api-access-wnv8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.112001 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnv8r\" (UniqueName: \"kubernetes.io/projected/87197028-3222-4c04-89a7-135997258e0d-kube-api-access-wnv8r\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.369014 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.389684 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.430305 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:31:37 crc kubenswrapper[4836]: E0217 14:31:37.431136 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87197028-3222-4c04-89a7-135997258e0d" containerName="kube-state-metrics" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.431153 4836 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="87197028-3222-4c04-89a7-135997258e0d" containerName="kube-state-metrics" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.433320 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="87197028-3222-4c04-89a7-135997258e0d" containerName="kube-state-metrics" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.434809 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.439195 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.459689 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.480678 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.536823 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhk2k\" (UniqueName: \"kubernetes.io/projected/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-api-access-qhk2k\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.537249 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.537355 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.537384 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.616648 4836 generic.go:334] "Generic (PLEG): container finished" podID="73de5f3f-982c-4471-b91b-e3725da6be03" containerID="62c716ef584f99d01d3c8b6a75eeb1c7c51deabb2ccc02590847f8e3c96e8ddb" exitCode=137 Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.616734 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73de5f3f-982c-4471-b91b-e3725da6be03","Type":"ContainerDied","Data":"62c716ef584f99d01d3c8b6a75eeb1c7c51deabb2ccc02590847f8e3c96e8ddb"} Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.622007 4836 generic.go:334] "Generic (PLEG): container finished" podID="cc99d806-e359-4577-8a61-1b527af8779f" containerID="b84dd65de54881081222d1401d684becd3ab6f396a5d3ddb1a10e413f4f858e0" exitCode=0 Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.622207 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89b2r" event={"ID":"cc99d806-e359-4577-8a61-1b527af8779f","Type":"ContainerDied","Data":"b84dd65de54881081222d1401d684becd3ab6f396a5d3ddb1a10e413f4f858e0"} Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.622262 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89b2r" 
event={"ID":"cc99d806-e359-4577-8a61-1b527af8779f","Type":"ContainerDied","Data":"7917a2258074c4b89c2b9c207136528b694ef3fdf3891f163bd96f2105f7c9c7"} Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.622277 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7917a2258074c4b89c2b9c207136528b694ef3fdf3891f163bd96f2105f7c9c7" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.629889 4836 generic.go:334] "Generic (PLEG): container finished" podID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerID="7b113b343450d9b94ea6adbfe150b4f215416e1008c8bb123e85d6f538396d6e" exitCode=137 Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.630220 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d6e757d-b7e9-417b-a63e-94879c7f3f74","Type":"ContainerDied","Data":"7b113b343450d9b94ea6adbfe150b4f215416e1008c8bb123e85d6f538396d6e"} Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.638740 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhk2k\" (UniqueName: \"kubernetes.io/projected/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-api-access-qhk2k\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.638822 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.641391 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-state-metrics-tls-certs\") pod 
\"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.641468 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.647611 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.653090 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.656741 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8809e181-9f70-4810-97e8-6fc4c9e3561a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.669381 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhk2k\" (UniqueName: \"kubernetes.io/projected/8809e181-9f70-4810-97e8-6fc4c9e3561a-kube-api-access-qhk2k\") pod \"kube-state-metrics-0\" (UID: \"8809e181-9f70-4810-97e8-6fc4c9e3561a\") " 
pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.702756 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.749477 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-catalog-content\") pod \"cc99d806-e359-4577-8a61-1b527af8779f\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.749748 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97574\" (UniqueName: \"kubernetes.io/projected/cc99d806-e359-4577-8a61-1b527af8779f-kube-api-access-97574\") pod \"cc99d806-e359-4577-8a61-1b527af8779f\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.749950 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-utilities\") pod \"cc99d806-e359-4577-8a61-1b527af8779f\" (UID: \"cc99d806-e359-4577-8a61-1b527af8779f\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.762984 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-utilities" (OuterVolumeSpecName: "utilities") pod "cc99d806-e359-4577-8a61-1b527af8779f" (UID: "cc99d806-e359-4577-8a61-1b527af8779f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.770037 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc99d806-e359-4577-8a61-1b527af8779f-kube-api-access-97574" (OuterVolumeSpecName: "kube-api-access-97574") pod "cc99d806-e359-4577-8a61-1b527af8779f" (UID: "cc99d806-e359-4577-8a61-1b527af8779f"). InnerVolumeSpecName "kube-api-access-97574". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.788363 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.872714 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.872771 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97574\" (UniqueName: \"kubernetes.io/projected/cc99d806-e359-4577-8a61-1b527af8779f-kube-api-access-97574\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.918254 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.974868 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-combined-ca-bundle\") pod \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.975568 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-config-data\") pod \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.975812 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czlhp\" (UniqueName: \"kubernetes.io/projected/3d6e757d-b7e9-417b-a63e-94879c7f3f74-kube-api-access-czlhp\") pod \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.975877 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6e757d-b7e9-417b-a63e-94879c7f3f74-logs\") pod \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\" (UID: \"3d6e757d-b7e9-417b-a63e-94879c7f3f74\") " Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.980085 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6e757d-b7e9-417b-a63e-94879c7f3f74-logs" (OuterVolumeSpecName: "logs") pod "3d6e757d-b7e9-417b-a63e-94879c7f3f74" (UID: "3d6e757d-b7e9-417b-a63e-94879c7f3f74"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:37 crc kubenswrapper[4836]: I0217 14:31:37.987401 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6e757d-b7e9-417b-a63e-94879c7f3f74-kube-api-access-czlhp" (OuterVolumeSpecName: "kube-api-access-czlhp") pod "3d6e757d-b7e9-417b-a63e-94879c7f3f74" (UID: "3d6e757d-b7e9-417b-a63e-94879c7f3f74"). InnerVolumeSpecName "kube-api-access-czlhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.006165 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc99d806-e359-4577-8a61-1b527af8779f" (UID: "cc99d806-e359-4577-8a61-1b527af8779f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.009140 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.051711 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-config-data" (OuterVolumeSpecName: "config-data") pod "3d6e757d-b7e9-417b-a63e-94879c7f3f74" (UID: "3d6e757d-b7e9-417b-a63e-94879c7f3f74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.054520 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d6e757d-b7e9-417b-a63e-94879c7f3f74" (UID: "3d6e757d-b7e9-417b-a63e-94879c7f3f74"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.078576 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69kdk\" (UniqueName: \"kubernetes.io/projected/73de5f3f-982c-4471-b91b-e3725da6be03-kube-api-access-69kdk\") pod \"73de5f3f-982c-4471-b91b-e3725da6be03\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.078748 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-config-data\") pod \"73de5f3f-982c-4471-b91b-e3725da6be03\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.078918 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-combined-ca-bundle\") pod \"73de5f3f-982c-4471-b91b-e3725da6be03\" (UID: \"73de5f3f-982c-4471-b91b-e3725da6be03\") " Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.080175 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czlhp\" (UniqueName: \"kubernetes.io/projected/3d6e757d-b7e9-417b-a63e-94879c7f3f74-kube-api-access-czlhp\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.080201 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6e757d-b7e9-417b-a63e-94879c7f3f74-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.080232 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99d806-e359-4577-8a61-1b527af8779f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.080249 4836 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.080264 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6e757d-b7e9-417b-a63e-94879c7f3f74-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.085129 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73de5f3f-982c-4471-b91b-e3725da6be03-kube-api-access-69kdk" (OuterVolumeSpecName: "kube-api-access-69kdk") pod "73de5f3f-982c-4471-b91b-e3725da6be03" (UID: "73de5f3f-982c-4471-b91b-e3725da6be03"). InnerVolumeSpecName "kube-api-access-69kdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.150539 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-config-data" (OuterVolumeSpecName: "config-data") pod "73de5f3f-982c-4471-b91b-e3725da6be03" (UID: "73de5f3f-982c-4471-b91b-e3725da6be03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.150653 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73de5f3f-982c-4471-b91b-e3725da6be03" (UID: "73de5f3f-982c-4471-b91b-e3725da6be03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.181901 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.181938 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73de5f3f-982c-4471-b91b-e3725da6be03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.181950 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69kdk\" (UniqueName: \"kubernetes.io/projected/73de5f3f-982c-4471-b91b-e3725da6be03-kube-api-access-69kdk\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.425901 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 14:31:38 crc kubenswrapper[4836]: W0217 14:31:38.439480 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8809e181_9f70_4810_97e8_6fc4c9e3561a.slice/crio-da9d483ebf10af68921e181af3aa5a70697e99df3f7ea62fd5c7d1a9301dc8d3 WatchSource:0}: Error finding container da9d483ebf10af68921e181af3aa5a70697e99df3f7ea62fd5c7d1a9301dc8d3: Status 404 returned error can't find the container with id da9d483ebf10af68921e181af3aa5a70697e99df3f7ea62fd5c7d1a9301dc8d3 Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.581815 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87197028-3222-4c04-89a7-135997258e0d" path="/var/lib/kubelet/pods/87197028-3222-4c04-89a7-135997258e0d/volumes" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.646548 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"8809e181-9f70-4810-97e8-6fc4c9e3561a","Type":"ContainerStarted","Data":"da9d483ebf10af68921e181af3aa5a70697e99df3f7ea62fd5c7d1a9301dc8d3"} Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.649587 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.649578 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d6e757d-b7e9-417b-a63e-94879c7f3f74","Type":"ContainerDied","Data":"b6f643b62e3a190c9d6903fa464d7953ef4da0c0399e9c74ba601860f623c445"} Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.649759 4836 scope.go:117] "RemoveContainer" containerID="7b113b343450d9b94ea6adbfe150b4f215416e1008c8bb123e85d6f538396d6e" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.661635 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89b2r" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.662710 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.663153 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73de5f3f-982c-4471-b91b-e3725da6be03","Type":"ContainerDied","Data":"73eb695666ad36f06c437207b1c5c555a1d75dfaa1f2995189afe99e230da0da"} Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.713058 4836 scope.go:117] "RemoveContainer" containerID="376eec79f26ee001240c9935be991e0037ae7235786eba961156abcccae1aa84" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.759090 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.773386 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.776786 4836 scope.go:117] "RemoveContainer" containerID="62c716ef584f99d01d3c8b6a75eeb1c7c51deabb2ccc02590847f8e3c96e8ddb" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.791266 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.838731 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.884891 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-89b2r"] Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.934089 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:31:38 crc kubenswrapper[4836]: E0217 14:31:38.934925 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="extract-content" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.934949 4836 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="extract-content" Feb 17 14:31:38 crc kubenswrapper[4836]: E0217 14:31:38.934974 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="registry-server" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.934981 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="registry-server" Feb 17 14:31:38 crc kubenswrapper[4836]: E0217 14:31:38.934999 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-metadata" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935005 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-metadata" Feb 17 14:31:38 crc kubenswrapper[4836]: E0217 14:31:38.935019 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="extract-utilities" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935025 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="extract-utilities" Feb 17 14:31:38 crc kubenswrapper[4836]: E0217 14:31:38.935038 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73de5f3f-982c-4471-b91b-e3725da6be03" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935068 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="73de5f3f-982c-4471-b91b-e3725da6be03" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 14:31:38 crc kubenswrapper[4836]: E0217 14:31:38.935094 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-log" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935100 4836 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-log" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935346 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-metadata" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935360 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc99d806-e359-4577-8a61-1b527af8779f" containerName="registry-server" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935389 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" containerName="nova-metadata-log" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.935407 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="73de5f3f-982c-4471-b91b-e3725da6be03" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.938364 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.941706 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.943278 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 14:31:38 crc kubenswrapper[4836]: I0217 14:31:38.965543 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-89b2r"] Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.021387 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.023595 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.028312 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.028619 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.028751 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.048379 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.066495 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.070114 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-config-data\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.070357 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlb4\" (UniqueName: \"kubernetes.io/projected/327aaf35-8278-4f1a-b369-7a40209c0a8e-kube-api-access-mnlb4\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.070429 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.070465 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/327aaf35-8278-4f1a-b369-7a40209c0a8e-logs\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.070652 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.174126 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.174533 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.174646 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-config-data\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: 
I0217 14:31:39.174686 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.175145 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf7r6\" (UniqueName: \"kubernetes.io/projected/6d9c8dd5-2ccb-4656-a059-352c03aa923d-kube-api-access-kf7r6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.175258 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.175437 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlb4\" (UniqueName: \"kubernetes.io/projected/327aaf35-8278-4f1a-b369-7a40209c0a8e-kube-api-access-mnlb4\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.175575 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: 
I0217 14:31:39.175613 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.175662 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/327aaf35-8278-4f1a-b369-7a40209c0a8e-logs\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.183153 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/327aaf35-8278-4f1a-b369-7a40209c0a8e-logs\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.186974 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-config-data\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.191739 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.192456 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.201990 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlb4\" (UniqueName: \"kubernetes.io/projected/327aaf35-8278-4f1a-b369-7a40209c0a8e-kube-api-access-mnlb4\") pod \"nova-metadata-0\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.280401 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.281387 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7r6\" (UniqueName: \"kubernetes.io/projected/6d9c8dd5-2ccb-4656-a059-352c03aa923d-kube-api-access-kf7r6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.281477 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.281544 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.281728 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.281886 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.288350 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.637657 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7r6\" (UniqueName: \"kubernetes.io/projected/6d9c8dd5-2ccb-4656-a059-352c03aa923d-kube-api-access-kf7r6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.678547 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.858811 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:39 crc kubenswrapper[4836]: I0217 14:31:39.874568 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9c8dd5-2ccb-4656-a059-352c03aa923d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d9c8dd5-2ccb-4656-a059-352c03aa923d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.056477 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.438944 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.582552 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6e757d-b7e9-417b-a63e-94879c7f3f74" path="/var/lib/kubelet/pods/3d6e757d-b7e9-417b-a63e-94879c7f3f74/volumes" Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.583506 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73de5f3f-982c-4471-b91b-e3725da6be03" path="/var/lib/kubelet/pods/73de5f3f-982c-4471-b91b-e3725da6be03/volumes" Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.584968 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc99d806-e359-4577-8a61-1b527af8779f" path="/var/lib/kubelet/pods/cc99d806-e359-4577-8a61-1b527af8779f/volumes" Feb 17 14:31:40 crc kubenswrapper[4836]: W0217 14:31:40.669370 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d9c8dd5_2ccb_4656_a059_352c03aa923d.slice/crio-98473541f824241a507b6901f4d765f231afe6dff37931dfef86a14dd772cd3e WatchSource:0}: Error finding container 98473541f824241a507b6901f4d765f231afe6dff37931dfef86a14dd772cd3e: Status 404 returned 
error can't find the container with id 98473541f824241a507b6901f4d765f231afe6dff37931dfef86a14dd772cd3e Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.688802 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.933235 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"327aaf35-8278-4f1a-b369-7a40209c0a8e","Type":"ContainerStarted","Data":"e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e"} Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.933344 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"327aaf35-8278-4f1a-b369-7a40209c0a8e","Type":"ContainerStarted","Data":"f0a3643ff133a4988442a790112d5d5fb30bb8fc7d8f8119a27a6cf6da2e8bfc"} Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.937547 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d9c8dd5-2ccb-4656-a059-352c03aa923d","Type":"ContainerStarted","Data":"98473541f824241a507b6901f4d765f231afe6dff37931dfef86a14dd772cd3e"} Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.950617 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8809e181-9f70-4810-97e8-6fc4c9e3561a","Type":"ContainerStarted","Data":"a350324cb5b9c856ef8a34e5ef41d3d953463dc1e3c10a657f4906005906c69b"} Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.951594 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.997262 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.997831 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-central-agent" containerID="cri-o://061b81d1777bbf667556b595b5cc63218a0966b7aa3e7690e90b4b84a2173bed" gracePeriod=30 Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.998147 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="proxy-httpd" containerID="cri-o://bd01937315b50abe24d9b50b0b169dd1610243cceeeeb3bd7fb5017e3522ad76" gracePeriod=30 Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.998219 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-notification-agent" containerID="cri-o://f38b096e0cb174cc64328d45e0488ca7d9888e5f9a99f36292e400cf1d4fd13d" gracePeriod=30 Feb 17 14:31:40 crc kubenswrapper[4836]: I0217 14:31:40.998345 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="sg-core" containerID="cri-o://b296d87fd7976e77f9e6129f3c2eff1cfb26664d576e803cadb9560994f76f7d" gracePeriod=30 Feb 17 14:31:41 crc kubenswrapper[4836]: I0217 14:31:41.001710 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.472179833 podStartE2EDuration="4.00167836s" podCreationTimestamp="2026-02-17 14:31:37 +0000 UTC" firstStartedPulling="2026-02-17 14:31:38.443092136 +0000 UTC m=+1524.786020405" lastFinishedPulling="2026-02-17 14:31:38.972590663 +0000 UTC m=+1525.315518932" observedRunningTime="2026-02-17 14:31:40.977210657 +0000 UTC m=+1527.320138936" watchObservedRunningTime="2026-02-17 14:31:41.00167836 +0000 UTC m=+1527.344606619" Feb 17 14:31:41 crc kubenswrapper[4836]: E0217 14:31:41.605395 4836 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68b35111_581a_4e2e_9fae_3e0248674655.slice/crio-conmon-bd01937315b50abe24d9b50b0b169dd1610243cceeeeb3bd7fb5017e3522ad76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68b35111_581a_4e2e_9fae_3e0248674655.slice/crio-bd01937315b50abe24d9b50b0b169dd1610243cceeeeb3bd7fb5017e3522ad76.scope\": RecentStats: unable to find data in memory cache]" Feb 17 14:31:41 crc kubenswrapper[4836]: I0217 14:31:41.897970 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 14:31:41 crc kubenswrapper[4836]: I0217 14:31:41.899048 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 14:31:41 crc kubenswrapper[4836]: I0217 14:31:41.904710 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 14:31:41 crc kubenswrapper[4836]: I0217 14:31:41.909685 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.031493 4836 generic.go:334] "Generic (PLEG): container finished" podID="68b35111-581a-4e2e-9fae-3e0248674655" containerID="bd01937315b50abe24d9b50b0b169dd1610243cceeeeb3bd7fb5017e3522ad76" exitCode=0 Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.031535 4836 generic.go:334] "Generic (PLEG): container finished" podID="68b35111-581a-4e2e-9fae-3e0248674655" containerID="b296d87fd7976e77f9e6129f3c2eff1cfb26664d576e803cadb9560994f76f7d" exitCode=2 Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.031542 4836 generic.go:334] "Generic (PLEG): container finished" podID="68b35111-581a-4e2e-9fae-3e0248674655" containerID="061b81d1777bbf667556b595b5cc63218a0966b7aa3e7690e90b4b84a2173bed" exitCode=0 Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.031589 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerDied","Data":"bd01937315b50abe24d9b50b0b169dd1610243cceeeeb3bd7fb5017e3522ad76"} Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.031622 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerDied","Data":"b296d87fd7976e77f9e6129f3c2eff1cfb26664d576e803cadb9560994f76f7d"} Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.031632 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerDied","Data":"061b81d1777bbf667556b595b5cc63218a0966b7aa3e7690e90b4b84a2173bed"} Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.035525 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d9c8dd5-2ccb-4656-a059-352c03aa923d","Type":"ContainerStarted","Data":"ac9a5d0baf9704bc40414219f5c0c2559f1d8f4e79d885d3018faf83fc960618"} Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.041030 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"327aaf35-8278-4f1a-b369-7a40209c0a8e","Type":"ContainerStarted","Data":"a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3"} Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.041072 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.075177 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.085178 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.085150738 podStartE2EDuration="4.085150738s" 
podCreationTimestamp="2026-02-17 14:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:42.058058524 +0000 UTC m=+1528.400986793" watchObservedRunningTime="2026-02-17 14:31:42.085150738 +0000 UTC m=+1528.428079007" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.111188 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.111158453 podStartE2EDuration="4.111158453s" podCreationTimestamp="2026-02-17 14:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:42.090152034 +0000 UTC m=+1528.433080303" watchObservedRunningTime="2026-02-17 14:31:42.111158453 +0000 UTC m=+1528.454086722" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.348183 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-snjhj"] Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.351288 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.377023 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-snjhj"] Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.527076 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.527194 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.527235 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.527316 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.527375 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-config\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.527399 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7bl\" (UniqueName: \"kubernetes.io/projected/6dc084a0-be89-4371-92a3-181cfe1979ce-kube-api-access-hz7bl\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.629878 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.629985 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.630022 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.630089 4836 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.630137 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-config\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.630163 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz7bl\" (UniqueName: \"kubernetes.io/projected/6dc084a0-be89-4371-92a3-181cfe1979ce-kube-api-access-hz7bl\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.631020 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.631267 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.631673 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.631884 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-config\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.632287 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc084a0-be89-4371-92a3-181cfe1979ce-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.655112 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz7bl\" (UniqueName: \"kubernetes.io/projected/6dc084a0-be89-4371-92a3-181cfe1979ce-kube-api-access-hz7bl\") pod \"dnsmasq-dns-5fd9b586ff-snjhj\" (UID: \"6dc084a0-be89-4371-92a3-181cfe1979ce\") " pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:42 crc kubenswrapper[4836]: I0217 14:31:42.694758 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:43 crc kubenswrapper[4836]: I0217 14:31:43.419430 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-snjhj"] Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.069326 4836 generic.go:334] "Generic (PLEG): container finished" podID="68b35111-581a-4e2e-9fae-3e0248674655" containerID="f38b096e0cb174cc64328d45e0488ca7d9888e5f9a99f36292e400cf1d4fd13d" exitCode=0 Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.069725 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerDied","Data":"f38b096e0cb174cc64328d45e0488ca7d9888e5f9a99f36292e400cf1d4fd13d"} Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.071881 4836 generic.go:334] "Generic (PLEG): container finished" podID="6dc084a0-be89-4371-92a3-181cfe1979ce" containerID="390fa4ca6b8979533f6405e113ef2079e208fe0693fc6d17b0be00a546b8f4a6" exitCode=0 Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.072468 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" event={"ID":"6dc084a0-be89-4371-92a3-181cfe1979ce","Type":"ContainerDied","Data":"390fa4ca6b8979533f6405e113ef2079e208fe0693fc6d17b0be00a546b8f4a6"} Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.072571 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" event={"ID":"6dc084a0-be89-4371-92a3-181cfe1979ce","Type":"ContainerStarted","Data":"afc5626684a403ed544c1a7eb27331441a69058c5e8e4168ae08e5ba526f6680"} Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.283003 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.283550 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" 
Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.367949 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.493797 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qclhz\" (UniqueName: \"kubernetes.io/projected/68b35111-581a-4e2e-9fae-3e0248674655-kube-api-access-qclhz\") pod \"68b35111-581a-4e2e-9fae-3e0248674655\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.493885 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-run-httpd\") pod \"68b35111-581a-4e2e-9fae-3e0248674655\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.493910 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-log-httpd\") pod \"68b35111-581a-4e2e-9fae-3e0248674655\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.493971 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-scripts\") pod \"68b35111-581a-4e2e-9fae-3e0248674655\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.494347 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-config-data\") pod \"68b35111-581a-4e2e-9fae-3e0248674655\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.494613 4836 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-combined-ca-bundle\") pod \"68b35111-581a-4e2e-9fae-3e0248674655\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.494765 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-sg-core-conf-yaml\") pod \"68b35111-581a-4e2e-9fae-3e0248674655\" (UID: \"68b35111-581a-4e2e-9fae-3e0248674655\") " Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.495250 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "68b35111-581a-4e2e-9fae-3e0248674655" (UID: "68b35111-581a-4e2e-9fae-3e0248674655"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.495616 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68b35111-581a-4e2e-9fae-3e0248674655" (UID: "68b35111-581a-4e2e-9fae-3e0248674655"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.496461 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.496495 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68b35111-581a-4e2e-9fae-3e0248674655-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.501544 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-scripts" (OuterVolumeSpecName: "scripts") pod "68b35111-581a-4e2e-9fae-3e0248674655" (UID: "68b35111-581a-4e2e-9fae-3e0248674655"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.517526 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b35111-581a-4e2e-9fae-3e0248674655-kube-api-access-qclhz" (OuterVolumeSpecName: "kube-api-access-qclhz") pod "68b35111-581a-4e2e-9fae-3e0248674655" (UID: "68b35111-581a-4e2e-9fae-3e0248674655"). InnerVolumeSpecName "kube-api-access-qclhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.576266 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68b35111-581a-4e2e-9fae-3e0248674655" (UID: "68b35111-581a-4e2e-9fae-3e0248674655"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.598573 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.598731 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qclhz\" (UniqueName: \"kubernetes.io/projected/68b35111-581a-4e2e-9fae-3e0248674655-kube-api-access-qclhz\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.598794 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.675582 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68b35111-581a-4e2e-9fae-3e0248674655" (UID: "68b35111-581a-4e2e-9fae-3e0248674655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.702344 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.706446 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-config-data" (OuterVolumeSpecName: "config-data") pod "68b35111-581a-4e2e-9fae-3e0248674655" (UID: "68b35111-581a-4e2e-9fae-3e0248674655"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:44 crc kubenswrapper[4836]: I0217 14:31:44.804650 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b35111-581a-4e2e-9fae-3e0248674655-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.057013 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.085102 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" event={"ID":"6dc084a0-be89-4371-92a3-181cfe1979ce","Type":"ContainerStarted","Data":"3d8cdfd5c39de98f01d9d2493ccdc3254e68c41faca9b0db871f1b8eb0b67eed"} Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.086566 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.089188 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68b35111-581a-4e2e-9fae-3e0248674655","Type":"ContainerDied","Data":"ae9ad801c19207ea76afd61356edd8b9b7b66ec00cfa25e818baaceb869c4ad6"} Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.089257 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.089263 4836 scope.go:117] "RemoveContainer" containerID="bd01937315b50abe24d9b50b0b169dd1610243cceeeeb3bd7fb5017e3522ad76"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.119104 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" podStartSLOduration=3.119083422 podStartE2EDuration="3.119083422s" podCreationTimestamp="2026-02-17 14:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:45.115878585 +0000 UTC m=+1531.458806874" watchObservedRunningTime="2026-02-17 14:31:45.119083422 +0000 UTC m=+1531.462011691"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.134454 4836 scope.go:117] "RemoveContainer" containerID="b296d87fd7976e77f9e6129f3c2eff1cfb26664d576e803cadb9560994f76f7d"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.157374 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.165570 4836 scope.go:117] "RemoveContainer" containerID="f38b096e0cb174cc64328d45e0488ca7d9888e5f9a99f36292e400cf1d4fd13d"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.177741 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.209389 4836 scope.go:117] "RemoveContainer" containerID="061b81d1777bbf667556b595b5cc63218a0966b7aa3e7690e90b4b84a2173bed"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.231323 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:45 crc kubenswrapper[4836]: E0217 14:31:45.231947 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-central-agent"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.231968 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-central-agent"
Feb 17 14:31:45 crc kubenswrapper[4836]: E0217 14:31:45.231993 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="sg-core"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.232000 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="sg-core"
Feb 17 14:31:45 crc kubenswrapper[4836]: E0217 14:31:45.232026 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-notification-agent"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.232045 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-notification-agent"
Feb 17 14:31:45 crc kubenswrapper[4836]: E0217 14:31:45.232056 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="proxy-httpd"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.232063 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="proxy-httpd"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.232344 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-central-agent"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.232367 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="ceilometer-notification-agent"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.232388 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="sg-core"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.232405 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b35111-581a-4e2e-9fae-3e0248674655" containerName="proxy-httpd"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.234860 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.237711 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.238031 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.238026 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.335731 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.358728 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.359124 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-log" containerID="cri-o://d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e" gracePeriod=30
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.359379 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-api" containerID="cri-o://6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047" gracePeriod=30
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.391580 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:45 crc kubenswrapper[4836]: E0217 14:31:45.392984 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-6ps4p log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="7f817058-dcec-4186-bb5e-213ab8d215c0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.419252 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-scripts\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.419420 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.419497 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.419538 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.419612 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ps4p\" (UniqueName: \"kubernetes.io/projected/7f817058-dcec-4186-bb5e-213ab8d215c0-kube-api-access-6ps4p\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.420033 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-config-data\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.420113 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.420202 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522373 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-config-data\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522436 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522501 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522540 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-scripts\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522572 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522610 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522633 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.522693 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ps4p\" (UniqueName: \"kubernetes.io/projected/7f817058-dcec-4186-bb5e-213ab8d215c0-kube-api-access-6ps4p\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.523761 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-log-httpd\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.524715 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-run-httpd\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.527411 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-scripts\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.528148 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.528732 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-config-data\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.529088 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.530689 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:45 crc kubenswrapper[4836]: I0217 14:31:45.542418 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ps4p\" (UniqueName: \"kubernetes.io/projected/7f817058-dcec-4186-bb5e-213ab8d215c0-kube-api-access-6ps4p\") pod \"ceilometer-0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") " pod="openstack/ceilometer-0"
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.102350 4836 generic.go:334] "Generic (PLEG): container finished" podID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerID="d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e" exitCode=143
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.102447 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf284e7d-7c68-4688-9e14-87e9c32f6c41","Type":"ContainerDied","Data":"d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e"}
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.103889 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.118591 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.237463 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-run-httpd\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.237669 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-sg-core-conf-yaml\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.237795 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-config-data\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.237864 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-combined-ca-bundle\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.237857 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.237895 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ps4p\" (UniqueName: \"kubernetes.io/projected/7f817058-dcec-4186-bb5e-213ab8d215c0-kube-api-access-6ps4p\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.237953 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-ceilometer-tls-certs\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.238014 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-log-httpd\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.238048 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-scripts\") pod \"7f817058-dcec-4186-bb5e-213ab8d215c0\" (UID: \"7f817058-dcec-4186-bb5e-213ab8d215c0\") "
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.238770 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.239015 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.247505 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.247491 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-scripts" (OuterVolumeSpecName: "scripts") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.247550 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-config-data" (OuterVolumeSpecName: "config-data") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.248316 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.249858 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f817058-dcec-4186-bb5e-213ab8d215c0-kube-api-access-6ps4p" (OuterVolumeSpecName: "kube-api-access-6ps4p") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "kube-api-access-6ps4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.251717 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f817058-dcec-4186-bb5e-213ab8d215c0" (UID: "7f817058-dcec-4186-bb5e-213ab8d215c0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.340979 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.341482 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.341496 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.341512 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ps4p\" (UniqueName: \"kubernetes.io/projected/7f817058-dcec-4186-bb5e-213ab8d215c0-kube-api-access-6ps4p\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.341528 4836 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.341542 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f817058-dcec-4186-bb5e-213ab8d215c0-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.341551 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f817058-dcec-4186-bb5e-213ab8d215c0-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 14:31:46 crc kubenswrapper[4836]: I0217 14:31:46.582242 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b35111-581a-4e2e-9fae-3e0248674655" path="/var/lib/kubelet/pods/68b35111-581a-4e2e-9fae-3e0248674655/volumes"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.113212 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.182199 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.203502 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.227106 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.230219 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.239583 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.239965 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.240107 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.241620 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.376010 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-run-httpd\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.376498 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-scripts\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.376609 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvnc\" (UniqueName: \"kubernetes.io/projected/143c175f-4768-4188-8f12-3f76bf70804f-kube-api-access-5wvnc\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.376704 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.376903 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.376960 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-config-data\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.376990 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-log-httpd\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.377169 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.479817 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.479929 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-run-httpd\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480224 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-scripts\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480331 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wvnc\" (UniqueName: \"kubernetes.io/projected/143c175f-4768-4188-8f12-3f76bf70804f-kube-api-access-5wvnc\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480370 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480516 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480558 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-config-data\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480575 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-run-httpd\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480589 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-log-httpd\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.480987 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-log-httpd\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.487261 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.487425 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.487559 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-config-data\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.488382 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-scripts\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.488659 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.504414 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wvnc\" (UniqueName: \"kubernetes.io/projected/143c175f-4768-4188-8f12-3f76bf70804f-kube-api-access-5wvnc\") pod \"ceilometer-0\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.560842 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 14:31:47 crc kubenswrapper[4836]: I0217 14:31:47.809225 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 17 14:31:48 crc kubenswrapper[4836]: I0217 14:31:48.084274 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:48 crc kubenswrapper[4836]: W0217 14:31:48.088181 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod143c175f_4768_4188_8f12_3f76bf70804f.slice/crio-9dbb838e04f66687c191dcad00147af5a9d0fa9da289d8afa2cdcb370fe12258 WatchSource:0}: Error finding container 9dbb838e04f66687c191dcad00147af5a9d0fa9da289d8afa2cdcb370fe12258: Status 404 returned error can't find the container with id 9dbb838e04f66687c191dcad00147af5a9d0fa9da289d8afa2cdcb370fe12258
Feb 17 14:31:48 crc kubenswrapper[4836]: I0217 14:31:48.143849 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerStarted","Data":"9dbb838e04f66687c191dcad00147af5a9d0fa9da289d8afa2cdcb370fe12258"}
Feb 17 14:31:48 crc kubenswrapper[4836]: I0217 14:31:48.253981 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 14:31:48 crc kubenswrapper[4836]: I0217 14:31:48.588227 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f817058-dcec-4186-bb5e-213ab8d215c0" path="/var/lib/kubelet/pods/7f817058-dcec-4186-bb5e-213ab8d215c0/volumes"
Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.118628 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.207724 4836 generic.go:334] "Generic (PLEG): container finished" podID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerID="6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047" exitCode=0
Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.207790 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf284e7d-7c68-4688-9e14-87e9c32f6c41","Type":"ContainerDied","Data":"6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047"}
Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.207825 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf284e7d-7c68-4688-9e14-87e9c32f6c41","Type":"ContainerDied","Data":"be3d92134965e9daf67db43dfe412847ed9ee954252aa5228b3fe8a1f26a6f7a"}
Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.207868 4836 scope.go:117] "RemoveContainer" containerID="6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047"
Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.208088 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.243325 4836 scope.go:117] "RemoveContainer" containerID="d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.243632 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-combined-ca-bundle\") pod \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.243673 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf284e7d-7c68-4688-9e14-87e9c32f6c41-logs\") pod \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.243710 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvc2r\" (UniqueName: \"kubernetes.io/projected/cf284e7d-7c68-4688-9e14-87e9c32f6c41-kube-api-access-rvc2r\") pod \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.243880 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-config-data\") pod \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\" (UID: \"cf284e7d-7c68-4688-9e14-87e9c32f6c41\") " Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.245427 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf284e7d-7c68-4688-9e14-87e9c32f6c41-logs" (OuterVolumeSpecName: "logs") pod "cf284e7d-7c68-4688-9e14-87e9c32f6c41" (UID: "cf284e7d-7c68-4688-9e14-87e9c32f6c41"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.247268 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf284e7d-7c68-4688-9e14-87e9c32f6c41-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.250807 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf284e7d-7c68-4688-9e14-87e9c32f6c41-kube-api-access-rvc2r" (OuterVolumeSpecName: "kube-api-access-rvc2r") pod "cf284e7d-7c68-4688-9e14-87e9c32f6c41" (UID: "cf284e7d-7c68-4688-9e14-87e9c32f6c41"). InnerVolumeSpecName "kube-api-access-rvc2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.281355 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.281414 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.287799 4836 scope.go:117] "RemoveContainer" containerID="6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047" Feb 17 14:31:49 crc kubenswrapper[4836]: E0217 14:31:49.289099 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047\": container with ID starting with 6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047 not found: ID does not exist" containerID="6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.289153 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047"} err="failed to 
get container status \"6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047\": rpc error: code = NotFound desc = could not find container \"6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047\": container with ID starting with 6e5e87a2800a9b226a2e0132c7153060336b7daacac50613672b14b4cb72f047 not found: ID does not exist" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.289179 4836 scope.go:117] "RemoveContainer" containerID="d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e" Feb 17 14:31:49 crc kubenswrapper[4836]: E0217 14:31:49.289689 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e\": container with ID starting with d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e not found: ID does not exist" containerID="d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.289717 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e"} err="failed to get container status \"d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e\": rpc error: code = NotFound desc = could not find container \"d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e\": container with ID starting with d2a16e7e8ec6c9424d109c9048a972b614d91e9e7b39aa49c9b987a971f99b2e not found: ID does not exist" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.315044 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-config-data" (OuterVolumeSpecName: "config-data") pod "cf284e7d-7c68-4688-9e14-87e9c32f6c41" (UID: "cf284e7d-7c68-4688-9e14-87e9c32f6c41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.322482 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf284e7d-7c68-4688-9e14-87e9c32f6c41" (UID: "cf284e7d-7c68-4688-9e14-87e9c32f6c41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.350442 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.350538 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvc2r\" (UniqueName: \"kubernetes.io/projected/cf284e7d-7c68-4688-9e14-87e9c32f6c41-kube-api-access-rvc2r\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.350552 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf284e7d-7c68-4688-9e14-87e9c32f6c41-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.569370 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.596382 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.653124 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:49 crc kubenswrapper[4836]: E0217 14:31:49.653787 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-log" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 
14:31:49.653843 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-log" Feb 17 14:31:49 crc kubenswrapper[4836]: E0217 14:31:49.653885 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-api" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.653891 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-api" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.654130 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-api" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.654157 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" containerName="nova-api-log" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.657035 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.659463 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.660213 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.660631 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.667459 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.778233 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95da3ac-7563-49bf-a956-b19297cb7d97-logs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.778364 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.778546 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6vzb\" (UniqueName: \"kubernetes.io/projected/c95da3ac-7563-49bf-a956-b19297cb7d97-kube-api-access-m6vzb\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.778835 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.779074 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-config-data\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.779244 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-public-tls-certs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.881260 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-config-data\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.881358 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-public-tls-certs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.881416 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95da3ac-7563-49bf-a956-b19297cb7d97-logs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: 
I0217 14:31:49.881436 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.881474 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6vzb\" (UniqueName: \"kubernetes.io/projected/c95da3ac-7563-49bf-a956-b19297cb7d97-kube-api-access-m6vzb\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.881585 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.882705 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95da3ac-7563-49bf-a956-b19297cb7d97-logs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.892534 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.892633 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-config-data\") pod \"nova-api-0\" (UID: 
\"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.892913 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.894700 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-public-tls-certs\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.901939 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6vzb\" (UniqueName: \"kubernetes.io/projected/c95da3ac-7563-49bf-a956-b19297cb7d97-kube-api-access-m6vzb\") pod \"nova-api-0\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " pod="openstack/nova-api-0" Feb 17 14:31:49 crc kubenswrapper[4836]: I0217 14:31:49.992077 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.057994 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.087990 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.293222 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerStarted","Data":"c5153296780e29125ef218661157bfd30ddc0ca224b65cb9ffd34fbe9f884fe5"} Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.302011 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.302376 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.331658 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.582553 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf284e7d-7c68-4688-9e14-87e9c32f6c41" path="/var/lib/kubelet/pods/cf284e7d-7c68-4688-9e14-87e9c32f6c41/volumes" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.586553 4836 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-h4mlr"] Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.588709 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.591585 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.592101 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.613445 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-h4mlr"] Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.699136 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.702185 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-config-data\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.703001 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.703416 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-scripts\") pod 
\"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.703580 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w49k2\" (UniqueName: \"kubernetes.io/projected/079f20c9-f742-4c4b-a8c0-a2a09573bf62-kube-api-access-w49k2\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.807887 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-scripts\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.807949 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w49k2\" (UniqueName: \"kubernetes.io/projected/079f20c9-f742-4c4b-a8c0-a2a09573bf62-kube-api-access-w49k2\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.808114 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-config-data\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.808155 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.815090 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-config-data\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.826728 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-scripts\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.832764 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:50 crc kubenswrapper[4836]: I0217 14:31:50.849121 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w49k2\" (UniqueName: \"kubernetes.io/projected/079f20c9-f742-4c4b-a8c0-a2a09573bf62-kube-api-access-w49k2\") pod \"nova-cell1-cell-mapping-h4mlr\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:51 crc kubenswrapper[4836]: I0217 14:31:51.033172 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:31:51 crc kubenswrapper[4836]: I0217 14:31:51.320131 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c95da3ac-7563-49bf-a956-b19297cb7d97","Type":"ContainerStarted","Data":"423c24607bf9f3ae8ddccd308e2f900af69eca84ed6108aea143a2d2240bf369"} Feb 17 14:31:51 crc kubenswrapper[4836]: I0217 14:31:51.320667 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c95da3ac-7563-49bf-a956-b19297cb7d97","Type":"ContainerStarted","Data":"f3759853b325832fd4c44c00a3e3389086733322b67a29ca459e1ee11b19dfb7"} Feb 17 14:31:51 crc kubenswrapper[4836]: I0217 14:31:51.352332 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerStarted","Data":"b5230a56f72b92f59f1479439851803ed41a8e0044ef00c9559a2e9a8714d70a"} Feb 17 14:31:51 crc kubenswrapper[4836]: I0217 14:31:51.352394 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerStarted","Data":"646a0ce4d819762c0c29a5b108e6c0da6c863c98cc2e8398dd5c431ab5025ec7"} Feb 17 14:31:51 crc kubenswrapper[4836]: I0217 14:31:51.699929 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-h4mlr"] Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.372130 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c95da3ac-7563-49bf-a956-b19297cb7d97","Type":"ContainerStarted","Data":"506f661fc5a6b974ebc362273c2cdd8a27145169f4fc426e732aad89c2e734ed"} Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.376620 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h4mlr" 
event={"ID":"079f20c9-f742-4c4b-a8c0-a2a09573bf62","Type":"ContainerStarted","Data":"6d24e9f78b938b24616765924395f09dc01b17f432bd2a5ca96dd30f763b95e2"} Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.376674 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h4mlr" event={"ID":"079f20c9-f742-4c4b-a8c0-a2a09573bf62","Type":"ContainerStarted","Data":"04c119ecda97e9ecffd6aff4094f5b50acfbc974345595ef1ecf805ee73c0e65"} Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.403242 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.403202288 podStartE2EDuration="3.403202288s" podCreationTimestamp="2026-02-17 14:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:52.39440157 +0000 UTC m=+1538.737329859" watchObservedRunningTime="2026-02-17 14:31:52.403202288 +0000 UTC m=+1538.746130557" Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.427260 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-h4mlr" podStartSLOduration=2.427234039 podStartE2EDuration="2.427234039s" podCreationTimestamp="2026-02-17 14:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:31:52.415279606 +0000 UTC m=+1538.758207885" watchObservedRunningTime="2026-02-17 14:31:52.427234039 +0000 UTC m=+1538.770162308" Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.697607 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-snjhj" Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.772981 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-cbrcp"] Feb 17 14:31:52 crc kubenswrapper[4836]: I0217 14:31:52.773350 
4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" podUID="d8b08728-c946-43e4-85fa-0b033034bd26" containerName="dnsmasq-dns" containerID="cri-o://6c739c83cd6c60eccf82cdc83958244ae182a579d8b46273f0f3fb2234b691ec" gracePeriod=10 Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.403281 4836 generic.go:334] "Generic (PLEG): container finished" podID="d8b08728-c946-43e4-85fa-0b033034bd26" containerID="6c739c83cd6c60eccf82cdc83958244ae182a579d8b46273f0f3fb2234b691ec" exitCode=0 Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.403347 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" event={"ID":"d8b08728-c946-43e4-85fa-0b033034bd26","Type":"ContainerDied","Data":"6c739c83cd6c60eccf82cdc83958244ae182a579d8b46273f0f3fb2234b691ec"} Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.419565 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerStarted","Data":"a50d34b1dd358d7c95a4c22b41eaa637c5a3469ea425b6fa6f7b63908874bad7"} Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.420348 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-central-agent" containerID="cri-o://c5153296780e29125ef218661157bfd30ddc0ca224b65cb9ffd34fbe9f884fe5" gracePeriod=30 Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.421014 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="proxy-httpd" containerID="cri-o://a50d34b1dd358d7c95a4c22b41eaa637c5a3469ea425b6fa6f7b63908874bad7" gracePeriod=30 Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.421088 4836 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="sg-core" containerID="cri-o://b5230a56f72b92f59f1479439851803ed41a8e0044ef00c9559a2e9a8714d70a" gracePeriod=30 Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.421141 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-notification-agent" containerID="cri-o://646a0ce4d819762c0c29a5b108e6c0da6c863c98cc2e8398dd5c431ab5025ec7" gracePeriod=30 Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.454634 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.27692359 podStartE2EDuration="6.454606716s" podCreationTimestamp="2026-02-17 14:31:47 +0000 UTC" firstStartedPulling="2026-02-17 14:31:48.096524374 +0000 UTC m=+1534.439452643" lastFinishedPulling="2026-02-17 14:31:52.27420751 +0000 UTC m=+1538.617135769" observedRunningTime="2026-02-17 14:31:53.451984255 +0000 UTC m=+1539.794912534" watchObservedRunningTime="2026-02-17 14:31:53.454606716 +0000 UTC m=+1539.797534985" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.715134 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.846397 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-sb\") pod \"d8b08728-c946-43e4-85fa-0b033034bd26\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.846498 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-swift-storage-0\") pod \"d8b08728-c946-43e4-85fa-0b033034bd26\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.846808 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-nb\") pod \"d8b08728-c946-43e4-85fa-0b033034bd26\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.846891 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-svc\") pod \"d8b08728-c946-43e4-85fa-0b033034bd26\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.846937 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl67m\" (UniqueName: \"kubernetes.io/projected/d8b08728-c946-43e4-85fa-0b033034bd26-kube-api-access-rl67m\") pod \"d8b08728-c946-43e4-85fa-0b033034bd26\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.847339 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-config\") pod \"d8b08728-c946-43e4-85fa-0b033034bd26\" (UID: \"d8b08728-c946-43e4-85fa-0b033034bd26\") " Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.877541 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b08728-c946-43e4-85fa-0b033034bd26-kube-api-access-rl67m" (OuterVolumeSpecName: "kube-api-access-rl67m") pod "d8b08728-c946-43e4-85fa-0b033034bd26" (UID: "d8b08728-c946-43e4-85fa-0b033034bd26"). InnerVolumeSpecName "kube-api-access-rl67m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.918761 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8b08728-c946-43e4-85fa-0b033034bd26" (UID: "d8b08728-c946-43e4-85fa-0b033034bd26"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.925587 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8b08728-c946-43e4-85fa-0b033034bd26" (UID: "d8b08728-c946-43e4-85fa-0b033034bd26"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.941017 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-config" (OuterVolumeSpecName: "config") pod "d8b08728-c946-43e4-85fa-0b033034bd26" (UID: "d8b08728-c946-43e4-85fa-0b033034bd26"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.942273 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8b08728-c946-43e4-85fa-0b033034bd26" (UID: "d8b08728-c946-43e4-85fa-0b033034bd26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.957403 4836 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.957461 4836 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.957474 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.957487 4836 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.957498 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl67m\" (UniqueName: \"kubernetes.io/projected/d8b08728-c946-43e4-85fa-0b033034bd26-kube-api-access-rl67m\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:53 crc kubenswrapper[4836]: I0217 14:31:53.963735 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8b08728-c946-43e4-85fa-0b033034bd26" (UID: "d8b08728-c946-43e4-85fa-0b033034bd26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.062994 4836 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b08728-c946-43e4-85fa-0b033034bd26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.435997 4836 generic.go:334] "Generic (PLEG): container finished" podID="143c175f-4768-4188-8f12-3f76bf70804f" containerID="a50d34b1dd358d7c95a4c22b41eaa637c5a3469ea425b6fa6f7b63908874bad7" exitCode=0 Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.436054 4836 generic.go:334] "Generic (PLEG): container finished" podID="143c175f-4768-4188-8f12-3f76bf70804f" containerID="b5230a56f72b92f59f1479439851803ed41a8e0044ef00c9559a2e9a8714d70a" exitCode=2 Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.436068 4836 generic.go:334] "Generic (PLEG): container finished" podID="143c175f-4768-4188-8f12-3f76bf70804f" containerID="646a0ce4d819762c0c29a5b108e6c0da6c863c98cc2e8398dd5c431ab5025ec7" exitCode=0 Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.436068 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerDied","Data":"a50d34b1dd358d7c95a4c22b41eaa637c5a3469ea425b6fa6f7b63908874bad7"} Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.436156 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerDied","Data":"b5230a56f72b92f59f1479439851803ed41a8e0044ef00c9559a2e9a8714d70a"} Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.436170 4836 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerDied","Data":"646a0ce4d819762c0c29a5b108e6c0da6c863c98cc2e8398dd5c431ab5025ec7"} Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.439996 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" event={"ID":"d8b08728-c946-43e4-85fa-0b033034bd26","Type":"ContainerDied","Data":"4a4c76bc357a85c3013a688505b9be4f985a6e124e635443b51b48a2960c2a36"} Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.440070 4836 scope.go:117] "RemoveContainer" containerID="6c739c83cd6c60eccf82cdc83958244ae182a579d8b46273f0f3fb2234b691ec" Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.440075 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-cbrcp" Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.494668 4836 scope.go:117] "RemoveContainer" containerID="10880f8e13f3f6efc6d19c175c05a63fc27f01501a301fd0a28b68afaa946ee2" Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.500099 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-cbrcp"] Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.528952 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-cbrcp"] Feb 17 14:31:54 crc kubenswrapper[4836]: I0217 14:31:54.631251 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b08728-c946-43e4-85fa-0b033034bd26" path="/var/lib/kubelet/pods/d8b08728-c946-43e4-85fa-0b033034bd26/volumes" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.195189 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gczl5"] Feb 17 14:31:55 crc kubenswrapper[4836]: E0217 14:31:55.195789 4836 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d8b08728-c946-43e4-85fa-0b033034bd26" containerName="init" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.195810 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b08728-c946-43e4-85fa-0b033034bd26" containerName="init" Feb 17 14:31:55 crc kubenswrapper[4836]: E0217 14:31:55.195824 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b08728-c946-43e4-85fa-0b033034bd26" containerName="dnsmasq-dns" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.195834 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b08728-c946-43e4-85fa-0b033034bd26" containerName="dnsmasq-dns" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.196057 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b08728-c946-43e4-85fa-0b033034bd26" containerName="dnsmasq-dns" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.197870 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.210895 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pkv\" (UniqueName: \"kubernetes.io/projected/4cd9d6fa-d7e3-483f-afdf-104754807815-kube-api-access-c8pkv\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.211011 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-utilities\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.211041 4836 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-catalog-content\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.213877 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gczl5"] Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.313339 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-utilities\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.313401 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-catalog-content\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.313568 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pkv\" (UniqueName: \"kubernetes.io/projected/4cd9d6fa-d7e3-483f-afdf-104754807815-kube-api-access-c8pkv\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.314267 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-catalog-content\") pod \"certified-operators-gczl5\" (UID: 
\"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.314282 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-utilities\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.350389 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pkv\" (UniqueName: \"kubernetes.io/projected/4cd9d6fa-d7e3-483f-afdf-104754807815-kube-api-access-c8pkv\") pod \"certified-operators-gczl5\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:55 crc kubenswrapper[4836]: I0217 14:31:55.517802 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:31:56 crc kubenswrapper[4836]: I0217 14:31:56.096209 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gczl5"] Feb 17 14:31:56 crc kubenswrapper[4836]: W0217 14:31:56.097806 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cd9d6fa_d7e3_483f_afdf_104754807815.slice/crio-bce52c57a9fcf2c1e5a87f3b45145f1651223562443f585e0fbfde97d5bea9a0 WatchSource:0}: Error finding container bce52c57a9fcf2c1e5a87f3b45145f1651223562443f585e0fbfde97d5bea9a0: Status 404 returned error can't find the container with id bce52c57a9fcf2c1e5a87f3b45145f1651223562443f585e0fbfde97d5bea9a0 Feb 17 14:31:56 crc kubenswrapper[4836]: I0217 14:31:56.466315 4836 generic.go:334] "Generic (PLEG): container finished" podID="4cd9d6fa-d7e3-483f-afdf-104754807815" 
containerID="d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2" exitCode=0 Feb 17 14:31:56 crc kubenswrapper[4836]: I0217 14:31:56.466381 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gczl5" event={"ID":"4cd9d6fa-d7e3-483f-afdf-104754807815","Type":"ContainerDied","Data":"d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2"} Feb 17 14:31:56 crc kubenswrapper[4836]: I0217 14:31:56.466417 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gczl5" event={"ID":"4cd9d6fa-d7e3-483f-afdf-104754807815","Type":"ContainerStarted","Data":"bce52c57a9fcf2c1e5a87f3b45145f1651223562443f585e0fbfde97d5bea9a0"} Feb 17 14:31:58 crc kubenswrapper[4836]: I0217 14:31:58.493437 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gczl5" event={"ID":"4cd9d6fa-d7e3-483f-afdf-104754807815","Type":"ContainerStarted","Data":"f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947"} Feb 17 14:31:58 crc kubenswrapper[4836]: I0217 14:31:58.500003 4836 generic.go:334] "Generic (PLEG): container finished" podID="143c175f-4768-4188-8f12-3f76bf70804f" containerID="c5153296780e29125ef218661157bfd30ddc0ca224b65cb9ffd34fbe9f884fe5" exitCode=0 Feb 17 14:31:58 crc kubenswrapper[4836]: I0217 14:31:58.500041 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerDied","Data":"c5153296780e29125ef218661157bfd30ddc0ca224b65cb9ffd34fbe9f884fe5"} Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.166687 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.286203 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.288981 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.292785 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.344761 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-scripts\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.344947 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-sg-core-conf-yaml\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.345100 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-combined-ca-bundle\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.345229 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-config-data\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: 
I0217 14:31:59.345268 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-log-httpd\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.345365 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-run-httpd\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.345403 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wvnc\" (UniqueName: \"kubernetes.io/projected/143c175f-4768-4188-8f12-3f76bf70804f-kube-api-access-5wvnc\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.345445 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-ceilometer-tls-certs\") pod \"143c175f-4768-4188-8f12-3f76bf70804f\" (UID: \"143c175f-4768-4188-8f12-3f76bf70804f\") " Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.346650 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.346743 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.347755 4836 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.348185 4836 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/143c175f-4768-4188-8f12-3f76bf70804f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.358721 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143c175f-4768-4188-8f12-3f76bf70804f-kube-api-access-5wvnc" (OuterVolumeSpecName: "kube-api-access-5wvnc") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "kube-api-access-5wvnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.381925 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-scripts" (OuterVolumeSpecName: "scripts") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.387397 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.424268 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.450541 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.450668 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wvnc\" (UniqueName: \"kubernetes.io/projected/143c175f-4768-4188-8f12-3f76bf70804f-kube-api-access-5wvnc\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.450694 4836 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.450704 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.450713 4836 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.484966 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-config-data" (OuterVolumeSpecName: "config-data") pod "143c175f-4768-4188-8f12-3f76bf70804f" (UID: "143c175f-4768-4188-8f12-3f76bf70804f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.514552 4836 generic.go:334] "Generic (PLEG): container finished" podID="079f20c9-f742-4c4b-a8c0-a2a09573bf62" containerID="6d24e9f78b938b24616765924395f09dc01b17f432bd2a5ca96dd30f763b95e2" exitCode=0 Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.514635 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h4mlr" event={"ID":"079f20c9-f742-4c4b-a8c0-a2a09573bf62","Type":"ContainerDied","Data":"6d24e9f78b938b24616765924395f09dc01b17f432bd2a5ca96dd30f763b95e2"} Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.519567 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"143c175f-4768-4188-8f12-3f76bf70804f","Type":"ContainerDied","Data":"9dbb838e04f66687c191dcad00147af5a9d0fa9da289d8afa2cdcb370fe12258"} Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.519632 4836 scope.go:117] "RemoveContainer" containerID="a50d34b1dd358d7c95a4c22b41eaa637c5a3469ea425b6fa6f7b63908874bad7" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.519860 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.523679 4836 generic.go:334] "Generic (PLEG): container finished" podID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerID="f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947" exitCode=0 Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.526037 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gczl5" event={"ID":"4cd9d6fa-d7e3-483f-afdf-104754807815","Type":"ContainerDied","Data":"f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947"} Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.549728 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.553079 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.553107 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143c175f-4768-4188-8f12-3f76bf70804f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.633343 4836 scope.go:117] "RemoveContainer" containerID="b5230a56f72b92f59f1479439851803ed41a8e0044ef00c9559a2e9a8714d70a" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.639175 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.658689 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.700150 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:59 crc kubenswrapper[4836]: E0217 
14:31:59.709345 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="proxy-httpd" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.709405 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="proxy-httpd" Feb 17 14:31:59 crc kubenswrapper[4836]: E0217 14:31:59.709491 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="sg-core" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.709506 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="sg-core" Feb 17 14:31:59 crc kubenswrapper[4836]: E0217 14:31:59.709522 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-central-agent" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.709536 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-central-agent" Feb 17 14:31:59 crc kubenswrapper[4836]: E0217 14:31:59.709562 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-notification-agent" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.709575 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-notification-agent" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.711440 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-central-agent" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.711468 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="sg-core" Feb 17 14:31:59 crc 
kubenswrapper[4836]: I0217 14:31:59.711479 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="proxy-httpd" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.711500 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="143c175f-4768-4188-8f12-3f76bf70804f" containerName="ceilometer-notification-agent" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.713888 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.726962 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.727153 4836 scope.go:117] "RemoveContainer" containerID="646a0ce4d819762c0c29a5b108e6c0da6c863c98cc2e8398dd5c431ab5025ec7" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.727773 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.727906 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.728164 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.759352 4836 scope.go:117] "RemoveContainer" containerID="c5153296780e29125ef218661157bfd30ddc0ca224b65cb9ffd34fbe9f884fe5" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.861063 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: 
I0217 14:31:59.862423 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.862594 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ddcf30e-7916-4b59-8986-a5d2c218170e-log-httpd\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.862725 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-scripts\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.862825 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ddcf30e-7916-4b59-8986-a5d2c218170e-run-httpd\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.862964 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.863106 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qrhqt\" (UniqueName: \"kubernetes.io/projected/1ddcf30e-7916-4b59-8986-a5d2c218170e-kube-api-access-qrhqt\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.863215 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-config-data\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.966408 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-scripts\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.966699 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ddcf30e-7916-4b59-8986-a5d2c218170e-run-httpd\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.966826 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.967022 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhqt\" (UniqueName: \"kubernetes.io/projected/1ddcf30e-7916-4b59-8986-a5d2c218170e-kube-api-access-qrhqt\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " 
pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.967104 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-config-data\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.967384 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.968067 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ddcf30e-7916-4b59-8986-a5d2c218170e-run-httpd\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.968085 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.968360 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ddcf30e-7916-4b59-8986-a5d2c218170e-log-httpd\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.968967 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1ddcf30e-7916-4b59-8986-a5d2c218170e-log-httpd\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.972364 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.972410 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-config-data\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.981431 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.985651 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-scripts\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.987037 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcf30e-7916-4b59-8986-a5d2c218170e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.992123 4836 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhqt\" (UniqueName: \"kubernetes.io/projected/1ddcf30e-7916-4b59-8986-a5d2c218170e-kube-api-access-qrhqt\") pod \"ceilometer-0\" (UID: \"1ddcf30e-7916-4b59-8986-a5d2c218170e\") " pod="openstack/ceilometer-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.995536 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:31:59 crc kubenswrapper[4836]: I0217 14:31:59.996190 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:32:00 crc kubenswrapper[4836]: I0217 14:32:00.044692 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 14:32:00 crc kubenswrapper[4836]: I0217 14:32:00.539001 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gczl5" event={"ID":"4cd9d6fa-d7e3-483f-afdf-104754807815","Type":"ContainerStarted","Data":"53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef"} Feb 17 14:32:00 crc kubenswrapper[4836]: I0217 14:32:00.575854 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gczl5" podStartSLOduration=2.040008613 podStartE2EDuration="5.575823025s" podCreationTimestamp="2026-02-17 14:31:55 +0000 UTC" firstStartedPulling="2026-02-17 14:31:56.468734293 +0000 UTC m=+1542.811662562" lastFinishedPulling="2026-02-17 14:32:00.004548705 +0000 UTC m=+1546.347476974" observedRunningTime="2026-02-17 14:32:00.562964797 +0000 UTC m=+1546.905893066" watchObservedRunningTime="2026-02-17 14:32:00.575823025 +0000 UTC m=+1546.918751294" Feb 17 14:32:00 crc kubenswrapper[4836]: I0217 14:32:00.589952 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143c175f-4768-4188-8f12-3f76bf70804f" 
path="/var/lib/kubelet/pods/143c175f-4768-4188-8f12-3f76bf70804f/volumes" Feb 17 14:32:00 crc kubenswrapper[4836]: I0217 14:32:00.608098 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.013653 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.014442 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.088665 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.204652 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-scripts\") pod \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.204752 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-config-data\") pod \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.204984 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w49k2\" (UniqueName: \"kubernetes.io/projected/079f20c9-f742-4c4b-a8c0-a2a09573bf62-kube-api-access-w49k2\") pod \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.205101 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-combined-ca-bundle\") pod \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\" (UID: \"079f20c9-f742-4c4b-a8c0-a2a09573bf62\") " Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.234156 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079f20c9-f742-4c4b-a8c0-a2a09573bf62-kube-api-access-w49k2" (OuterVolumeSpecName: "kube-api-access-w49k2") pod "079f20c9-f742-4c4b-a8c0-a2a09573bf62" (UID: "079f20c9-f742-4c4b-a8c0-a2a09573bf62"). InnerVolumeSpecName "kube-api-access-w49k2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.234954 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-scripts" (OuterVolumeSpecName: "scripts") pod "079f20c9-f742-4c4b-a8c0-a2a09573bf62" (UID: "079f20c9-f742-4c4b-a8c0-a2a09573bf62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.284385 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "079f20c9-f742-4c4b-a8c0-a2a09573bf62" (UID: "079f20c9-f742-4c4b-a8c0-a2a09573bf62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.284554 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-config-data" (OuterVolumeSpecName: "config-data") pod "079f20c9-f742-4c4b-a8c0-a2a09573bf62" (UID: "079f20c9-f742-4c4b-a8c0-a2a09573bf62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.311131 4836 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.311199 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.311213 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w49k2\" (UniqueName: \"kubernetes.io/projected/079f20c9-f742-4c4b-a8c0-a2a09573bf62-kube-api-access-w49k2\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.311228 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079f20c9-f742-4c4b-a8c0-a2a09573bf62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.564904 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h4mlr" event={"ID":"079f20c9-f742-4c4b-a8c0-a2a09573bf62","Type":"ContainerDied","Data":"04c119ecda97e9ecffd6aff4094f5b50acfbc974345595ef1ecf805ee73c0e65"} Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.565327 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04c119ecda97e9ecffd6aff4094f5b50acfbc974345595ef1ecf805ee73c0e65" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.564959 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h4mlr" Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.568352 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ddcf30e-7916-4b59-8986-a5d2c218170e","Type":"ContainerStarted","Data":"87ca4914522ee72ea747870e81b062f1e0a07903072c8b9301031784cbe78eee"} Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.762679 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.763410 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-log" containerID="cri-o://423c24607bf9f3ae8ddccd308e2f900af69eca84ed6108aea143a2d2240bf369" gracePeriod=30 Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.763586 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-api" containerID="cri-o://506f661fc5a6b974ebc362273c2cdd8a27145169f4fc426e732aad89c2e734ed" gracePeriod=30 Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.778570 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.778917 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1853ac32-f733-4d5f-9cc2-edf83a927b28" containerName="nova-scheduler-scheduler" containerID="cri-o://cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" gracePeriod=30 Feb 17 14:32:01 crc kubenswrapper[4836]: I0217 14:32:01.823714 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:32:01 crc kubenswrapper[4836]: E0217 14:32:01.825935 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 14:32:01 crc kubenswrapper[4836]: E0217 14:32:01.829227 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 14:32:01 crc kubenswrapper[4836]: E0217 14:32:01.831136 4836 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 14:32:01 crc kubenswrapper[4836]: E0217 14:32:01.831196 4836 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1853ac32-f733-4d5f-9cc2-edf83a927b28" containerName="nova-scheduler-scheduler" Feb 17 14:32:02 crc kubenswrapper[4836]: I0217 14:32:02.586718 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ddcf30e-7916-4b59-8986-a5d2c218170e","Type":"ContainerStarted","Data":"94aac7de459f3e7ae7761b0e4dbad73203c8083dca82a26ff7b92d1e673bbaff"} Feb 17 14:32:02 crc kubenswrapper[4836]: I0217 14:32:02.587466 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1ddcf30e-7916-4b59-8986-a5d2c218170e","Type":"ContainerStarted","Data":"ff5c43e04f82a1d6177f770d5c348df5177eeaae920a5717f5932754f70a1fa3"} Feb 17 14:32:02 crc kubenswrapper[4836]: I0217 14:32:02.588947 4836 generic.go:334] "Generic (PLEG): container finished" podID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerID="423c24607bf9f3ae8ddccd308e2f900af69eca84ed6108aea143a2d2240bf369" exitCode=143 Feb 17 14:32:02 crc kubenswrapper[4836]: I0217 14:32:02.589155 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c95da3ac-7563-49bf-a956-b19297cb7d97","Type":"ContainerDied","Data":"423c24607bf9f3ae8ddccd308e2f900af69eca84ed6108aea143a2d2240bf369"} Feb 17 14:32:02 crc kubenswrapper[4836]: I0217 14:32:02.589277 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-log" containerID="cri-o://e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e" gracePeriod=30 Feb 17 14:32:02 crc kubenswrapper[4836]: I0217 14:32:02.589837 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-metadata" containerID="cri-o://a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3" gracePeriod=30 Feb 17 14:32:03 crc kubenswrapper[4836]: I0217 14:32:03.614038 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ddcf30e-7916-4b59-8986-a5d2c218170e","Type":"ContainerStarted","Data":"394bce71060f461459b97cd132dac0a4ab8421cf47e610b6fff4a930aefe8c38"} Feb 17 14:32:03 crc kubenswrapper[4836]: I0217 14:32:03.617804 4836 generic.go:334] "Generic (PLEG): container finished" podID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerID="e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e" exitCode=143 Feb 17 14:32:03 crc 
kubenswrapper[4836]: I0217 14:32:03.617845 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"327aaf35-8278-4f1a-b369-7a40209c0a8e","Type":"ContainerDied","Data":"e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e"} Feb 17 14:32:05 crc kubenswrapper[4836]: I0217 14:32:05.518476 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:32:05 crc kubenswrapper[4836]: I0217 14:32:05.519576 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:32:05 crc kubenswrapper[4836]: I0217 14:32:05.646989 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ddcf30e-7916-4b59-8986-a5d2c218170e","Type":"ContainerStarted","Data":"f2c5f59a3ea0bb2c698d106db28ebcc901db443392a5bbd1ae4a3f5e393b59c6"} Feb 17 14:32:05 crc kubenswrapper[4836]: I0217 14:32:05.647856 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 14:32:05 crc kubenswrapper[4836]: I0217 14:32:05.669829 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.913155196 podStartE2EDuration="6.669801126s" podCreationTimestamp="2026-02-17 14:31:59 +0000 UTC" firstStartedPulling="2026-02-17 14:32:00.610118595 +0000 UTC m=+1546.953046864" lastFinishedPulling="2026-02-17 14:32:04.366764525 +0000 UTC m=+1550.709692794" observedRunningTime="2026-02-17 14:32:05.66811048 +0000 UTC m=+1552.011038749" watchObservedRunningTime="2026-02-17 14:32:05.669801126 +0000 UTC m=+1552.012729395" Feb 17 14:32:05 crc kubenswrapper[4836]: I0217 14:32:05.728634 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.225:8775/\": read tcp 10.217.0.2:59416->10.217.0.225:8775: read: connection reset by peer" Feb 17 14:32:05 crc kubenswrapper[4836]: I0217 14:32:05.728987 4836 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": read tcp 10.217.0.2:59430->10.217.0.225:8775: read: connection reset by peer" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.397582 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.537223 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-config-data\") pod \"327aaf35-8278-4f1a-b369-7a40209c0a8e\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.537287 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-combined-ca-bundle\") pod \"327aaf35-8278-4f1a-b369-7a40209c0a8e\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.537550 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/327aaf35-8278-4f1a-b369-7a40209c0a8e-logs\") pod \"327aaf35-8278-4f1a-b369-7a40209c0a8e\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.537604 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlb4\" (UniqueName: \"kubernetes.io/projected/327aaf35-8278-4f1a-b369-7a40209c0a8e-kube-api-access-mnlb4\") pod 
\"327aaf35-8278-4f1a-b369-7a40209c0a8e\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.537743 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-nova-metadata-tls-certs\") pod \"327aaf35-8278-4f1a-b369-7a40209c0a8e\" (UID: \"327aaf35-8278-4f1a-b369-7a40209c0a8e\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.539892 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/327aaf35-8278-4f1a-b369-7a40209c0a8e-logs" (OuterVolumeSpecName: "logs") pod "327aaf35-8278-4f1a-b369-7a40209c0a8e" (UID: "327aaf35-8278-4f1a-b369-7a40209c0a8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.545382 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327aaf35-8278-4f1a-b369-7a40209c0a8e-kube-api-access-mnlb4" (OuterVolumeSpecName: "kube-api-access-mnlb4") pod "327aaf35-8278-4f1a-b369-7a40209c0a8e" (UID: "327aaf35-8278-4f1a-b369-7a40209c0a8e"). InnerVolumeSpecName "kube-api-access-mnlb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.580103 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gczl5" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="registry-server" probeResult="failure" output=< Feb 17 14:32:06 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:32:06 crc kubenswrapper[4836]: > Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.582635 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-config-data" (OuterVolumeSpecName: "config-data") pod "327aaf35-8278-4f1a-b369-7a40209c0a8e" (UID: "327aaf35-8278-4f1a-b369-7a40209c0a8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.637804 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "327aaf35-8278-4f1a-b369-7a40209c0a8e" (UID: "327aaf35-8278-4f1a-b369-7a40209c0a8e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.641588 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.643339 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/327aaf35-8278-4f1a-b369-7a40209c0a8e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.643810 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlb4\" (UniqueName: \"kubernetes.io/projected/327aaf35-8278-4f1a-b369-7a40209c0a8e-kube-api-access-mnlb4\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.643906 4836 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.643990 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.643506 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "327aaf35-8278-4f1a-b369-7a40209c0a8e" (UID: "327aaf35-8278-4f1a-b369-7a40209c0a8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.672829 4836 generic.go:334] "Generic (PLEG): container finished" podID="1853ac32-f733-4d5f-9cc2-edf83a927b28" containerID="cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" exitCode=0 Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.673076 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.673845 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1853ac32-f733-4d5f-9cc2-edf83a927b28","Type":"ContainerDied","Data":"cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1"} Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.673910 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1853ac32-f733-4d5f-9cc2-edf83a927b28","Type":"ContainerDied","Data":"942beb1e7c7f15eec98870c7e5614fc6da7b9d580327cb6b8b021c40fa96a882"} Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.673938 4836 scope.go:117] "RemoveContainer" containerID="cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.694705 4836 generic.go:334] "Generic (PLEG): container finished" podID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerID="a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3" exitCode=0 Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.694841 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"327aaf35-8278-4f1a-b369-7a40209c0a8e","Type":"ContainerDied","Data":"a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3"} Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.694940 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"327aaf35-8278-4f1a-b369-7a40209c0a8e","Type":"ContainerDied","Data":"f0a3643ff133a4988442a790112d5d5fb30bb8fc7d8f8119a27a6cf6da2e8bfc"} Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.697317 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.721714 4836 scope.go:117] "RemoveContainer" containerID="cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" Feb 17 14:32:06 crc kubenswrapper[4836]: E0217 14:32:06.722745 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1\": container with ID starting with cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1 not found: ID does not exist" containerID="cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.722947 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1"} err="failed to get container status \"cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1\": rpc error: code = NotFound desc = could not find container \"cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1\": container with ID starting with cdcb8668f9c38464113928fab0592c4bc426536aaf26770f316b86d52057f1a1 not found: ID does not exist" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.723060 4836 scope.go:117] "RemoveContainer" containerID="a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.745159 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-combined-ca-bundle\") pod \"1853ac32-f733-4d5f-9cc2-edf83a927b28\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.745313 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lbrvs\" (UniqueName: \"kubernetes.io/projected/1853ac32-f733-4d5f-9cc2-edf83a927b28-kube-api-access-lbrvs\") pod \"1853ac32-f733-4d5f-9cc2-edf83a927b28\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.745441 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-config-data\") pod \"1853ac32-f733-4d5f-9cc2-edf83a927b28\" (UID: \"1853ac32-f733-4d5f-9cc2-edf83a927b28\") " Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.746426 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/327aaf35-8278-4f1a-b369-7a40209c0a8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.749610 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1853ac32-f733-4d5f-9cc2-edf83a927b28-kube-api-access-lbrvs" (OuterVolumeSpecName: "kube-api-access-lbrvs") pod "1853ac32-f733-4d5f-9cc2-edf83a927b28" (UID: "1853ac32-f733-4d5f-9cc2-edf83a927b28"). InnerVolumeSpecName "kube-api-access-lbrvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.780402 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.782286 4836 scope.go:117] "RemoveContainer" containerID="e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.794568 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.808665 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.809414 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1853ac32-f733-4d5f-9cc2-edf83a927b28" (UID: "1853ac32-f733-4d5f-9cc2-edf83a927b28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: E0217 14:32:06.810685 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-metadata" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.810711 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-metadata" Feb 17 14:32:06 crc kubenswrapper[4836]: E0217 14:32:06.810734 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1853ac32-f733-4d5f-9cc2-edf83a927b28" containerName="nova-scheduler-scheduler" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.810743 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="1853ac32-f733-4d5f-9cc2-edf83a927b28" containerName="nova-scheduler-scheduler" Feb 17 14:32:06 crc kubenswrapper[4836]: E0217 14:32:06.810802 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-log" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.810812 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-log" Feb 17 14:32:06 crc kubenswrapper[4836]: E0217 14:32:06.810825 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079f20c9-f742-4c4b-a8c0-a2a09573bf62" containerName="nova-manage" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.810835 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="079f20c9-f742-4c4b-a8c0-a2a09573bf62" containerName="nova-manage" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.811128 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="079f20c9-f742-4c4b-a8c0-a2a09573bf62" containerName="nova-manage" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.811828 4836 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1853ac32-f733-4d5f-9cc2-edf83a927b28" containerName="nova-scheduler-scheduler" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.811914 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-log" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.811932 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" containerName="nova-metadata-metadata" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.819458 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.826224 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.828073 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.828197 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.839634 4836 scope.go:117] "RemoveContainer" containerID="a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3" Feb 17 14:32:06 crc kubenswrapper[4836]: E0217 14:32:06.842691 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3\": container with ID starting with a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3 not found: ID does not exist" containerID="a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.842939 4836 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3"} err="failed to get container status \"a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3\": rpc error: code = NotFound desc = could not find container \"a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3\": container with ID starting with a23d238ccfddfc1e73a017a79f41c992e73a86ed04c0afd30e536b5ddc8892f3 not found: ID does not exist" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.843058 4836 scope.go:117] "RemoveContainer" containerID="e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e" Feb 17 14:32:06 crc kubenswrapper[4836]: E0217 14:32:06.844143 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e\": container with ID starting with e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e not found: ID does not exist" containerID="e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.844373 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e"} err="failed to get container status \"e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e\": rpc error: code = NotFound desc = could not find container \"e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e\": container with ID starting with e6a160448c863ff6f5997675db3bca2b01a51a67e8fdbdf7026c37112391dd6e not found: ID does not exist" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.850636 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpstt\" (UniqueName: \"kubernetes.io/projected/c56150e0-07ff-4a45-9231-26fa261942c4-kube-api-access-wpstt\") pod 
\"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.851017 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56150e0-07ff-4a45-9231-26fa261942c4-logs\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.851287 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-config-data\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.852163 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.852356 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.852513 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.852781 4836 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-lbrvs\" (UniqueName: \"kubernetes.io/projected/1853ac32-f733-4d5f-9cc2-edf83a927b28-kube-api-access-lbrvs\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.854545 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-config-data" (OuterVolumeSpecName: "config-data") pod "1853ac32-f733-4d5f-9cc2-edf83a927b28" (UID: "1853ac32-f733-4d5f-9cc2-edf83a927b28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.954288 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpstt\" (UniqueName: \"kubernetes.io/projected/c56150e0-07ff-4a45-9231-26fa261942c4-kube-api-access-wpstt\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.954392 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56150e0-07ff-4a45-9231-26fa261942c4-logs\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.954470 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-config-data\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.954538 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.954566 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.954655 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1853ac32-f733-4d5f-9cc2-edf83a927b28-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.955183 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56150e0-07ff-4a45-9231-26fa261942c4-logs\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.958329 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.965053 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-config-data\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.965588 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c56150e0-07ff-4a45-9231-26fa261942c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:06 crc kubenswrapper[4836]: I0217 14:32:06.980948 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpstt\" (UniqueName: \"kubernetes.io/projected/c56150e0-07ff-4a45-9231-26fa261942c4-kube-api-access-wpstt\") pod \"nova-metadata-0\" (UID: \"c56150e0-07ff-4a45-9231-26fa261942c4\") " pod="openstack/nova-metadata-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.137924 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.223437 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.269491 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.289415 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.291682 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.302009 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.311409 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.484233 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-config-data\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.484398 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.484458 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm6m\" (UniqueName: \"kubernetes.io/projected/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-kube-api-access-nrm6m\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.586621 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.586946 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm6m\" (UniqueName: \"kubernetes.io/projected/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-kube-api-access-nrm6m\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.587066 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-config-data\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.593586 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-config-data\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.595382 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.605388 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrm6m\" (UniqueName: \"kubernetes.io/projected/6bfcfdb5-3886-47e2-8e71-33c95dc14e73-kube-api-access-nrm6m\") pod \"nova-scheduler-0\" (UID: \"6bfcfdb5-3886-47e2-8e71-33c95dc14e73\") " pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.678008 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.728039 4836 generic.go:334] "Generic (PLEG): container finished" podID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerID="506f661fc5a6b974ebc362273c2cdd8a27145169f4fc426e732aad89c2e734ed" exitCode=0 Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.728153 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c95da3ac-7563-49bf-a956-b19297cb7d97","Type":"ContainerDied","Data":"506f661fc5a6b974ebc362273c2cdd8a27145169f4fc426e732aad89c2e734ed"} Feb 17 14:32:07 crc kubenswrapper[4836]: I0217 14:32:07.865322 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.139277 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.216627 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-internal-tls-certs\") pod \"c95da3ac-7563-49bf-a956-b19297cb7d97\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.216805 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-config-data\") pod \"c95da3ac-7563-49bf-a956-b19297cb7d97\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.216848 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95da3ac-7563-49bf-a956-b19297cb7d97-logs\") pod \"c95da3ac-7563-49bf-a956-b19297cb7d97\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " Feb 17 14:32:08 crc 
kubenswrapper[4836]: I0217 14:32:08.216883 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-public-tls-certs\") pod \"c95da3ac-7563-49bf-a956-b19297cb7d97\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.217404 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6vzb\" (UniqueName: \"kubernetes.io/projected/c95da3ac-7563-49bf-a956-b19297cb7d97-kube-api-access-m6vzb\") pod \"c95da3ac-7563-49bf-a956-b19297cb7d97\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.219239 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c95da3ac-7563-49bf-a956-b19297cb7d97-logs" (OuterVolumeSpecName: "logs") pod "c95da3ac-7563-49bf-a956-b19297cb7d97" (UID: "c95da3ac-7563-49bf-a956-b19297cb7d97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.226607 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95da3ac-7563-49bf-a956-b19297cb7d97-kube-api-access-m6vzb" (OuterVolumeSpecName: "kube-api-access-m6vzb") pod "c95da3ac-7563-49bf-a956-b19297cb7d97" (UID: "c95da3ac-7563-49bf-a956-b19297cb7d97"). InnerVolumeSpecName "kube-api-access-m6vzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:08 crc kubenswrapper[4836]: W0217 14:32:08.284129 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bfcfdb5_3886_47e2_8e71_33c95dc14e73.slice/crio-9164b01fdf81254ae31a0690c9caa3d8d01063ff6d5527c633a1da310069d90d WatchSource:0}: Error finding container 9164b01fdf81254ae31a0690c9caa3d8d01063ff6d5527c633a1da310069d90d: Status 404 returned error can't find the container with id 9164b01fdf81254ae31a0690c9caa3d8d01063ff6d5527c633a1da310069d90d Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.291052 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.293756 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-config-data" (OuterVolumeSpecName: "config-data") pod "c95da3ac-7563-49bf-a956-b19297cb7d97" (UID: "c95da3ac-7563-49bf-a956-b19297cb7d97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.303409 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c95da3ac-7563-49bf-a956-b19297cb7d97" (UID: "c95da3ac-7563-49bf-a956-b19297cb7d97"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.308975 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c95da3ac-7563-49bf-a956-b19297cb7d97" (UID: "c95da3ac-7563-49bf-a956-b19297cb7d97"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.319352 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-combined-ca-bundle\") pod \"c95da3ac-7563-49bf-a956-b19297cb7d97\" (UID: \"c95da3ac-7563-49bf-a956-b19297cb7d97\") " Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.319744 4836 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.319764 4836 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.319775 4836 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95da3ac-7563-49bf-a956-b19297cb7d97-logs\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.319784 4836 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.319795 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6vzb\" (UniqueName: \"kubernetes.io/projected/c95da3ac-7563-49bf-a956-b19297cb7d97-kube-api-access-m6vzb\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.350458 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c95da3ac-7563-49bf-a956-b19297cb7d97" (UID: "c95da3ac-7563-49bf-a956-b19297cb7d97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.422379 4836 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95da3ac-7563-49bf-a956-b19297cb7d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.582418 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1853ac32-f733-4d5f-9cc2-edf83a927b28" path="/var/lib/kubelet/pods/1853ac32-f733-4d5f-9cc2-edf83a927b28/volumes" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.583781 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="327aaf35-8278-4f1a-b369-7a40209c0a8e" path="/var/lib/kubelet/pods/327aaf35-8278-4f1a-b369-7a40209c0a8e/volumes" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.760387 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56150e0-07ff-4a45-9231-26fa261942c4","Type":"ContainerStarted","Data":"f36aa1be569d660a4ed5b11eec489dc4a85f8381236ebdb9f5339117b15bc9db"} Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.760454 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56150e0-07ff-4a45-9231-26fa261942c4","Type":"ContainerStarted","Data":"01e84a78b931b1ecbae9c5f18cc01f44e6d6bdf9262c30e38e84ac0e79f4084a"} Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.760469 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c56150e0-07ff-4a45-9231-26fa261942c4","Type":"ContainerStarted","Data":"b7dfaca0a861b9c677b8d5b23f2024ef4060315f27a7aec2434206cbd68b3367"} Feb 17 
14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.763892 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c95da3ac-7563-49bf-a956-b19297cb7d97","Type":"ContainerDied","Data":"f3759853b325832fd4c44c00a3e3389086733322b67a29ca459e1ee11b19dfb7"} Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.763948 4836 scope.go:117] "RemoveContainer" containerID="506f661fc5a6b974ebc362273c2cdd8a27145169f4fc426e732aad89c2e734ed" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.764076 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.788616 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bfcfdb5-3886-47e2-8e71-33c95dc14e73","Type":"ContainerStarted","Data":"0b6199a97ea4c0df86dcd9fdc5893ef07b4dd716ba8ffef30b72bdf6e4d5c3fe"} Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.788675 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bfcfdb5-3886-47e2-8e71-33c95dc14e73","Type":"ContainerStarted","Data":"9164b01fdf81254ae31a0690c9caa3d8d01063ff6d5527c633a1da310069d90d"} Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.803459 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.803427273 podStartE2EDuration="2.803427273s" podCreationTimestamp="2026-02-17 14:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:32:08.79444485 +0000 UTC m=+1555.137373139" watchObservedRunningTime="2026-02-17 14:32:08.803427273 +0000 UTC m=+1555.146355542" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.823439 4836 scope.go:117] "RemoveContainer" containerID="423c24607bf9f3ae8ddccd308e2f900af69eca84ed6108aea143a2d2240bf369" Feb 17 14:32:08 crc 
kubenswrapper[4836]: I0217 14:32:08.827384 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.861529 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.863243 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.863218045 podStartE2EDuration="1.863218045s" podCreationTimestamp="2026-02-17 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:32:08.840228492 +0000 UTC m=+1555.183156761" watchObservedRunningTime="2026-02-17 14:32:08.863218045 +0000 UTC m=+1555.206146334" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.904789 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 14:32:08 crc kubenswrapper[4836]: E0217 14:32:08.908831 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-api" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.908925 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-api" Feb 17 14:32:08 crc kubenswrapper[4836]: E0217 14:32:08.908970 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-log" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.908978 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-log" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.910256 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-log" Feb 17 14:32:08 crc kubenswrapper[4836]: 
I0217 14:32:08.910284 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" containerName="nova-api-api" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.915480 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.917547 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.920020 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.922101 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.923056 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.943149 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntvj\" (UniqueName: \"kubernetes.io/projected/a8815111-fe36-4868-b092-2f88255f8f2b-kube-api-access-nntvj\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.943627 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.943792 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-config-data\") 
pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.943998 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.944028 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-public-tls-certs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:08 crc kubenswrapper[4836]: I0217 14:32:08.944508 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8815111-fe36-4868-b092-2f88255f8f2b-logs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.045890 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntvj\" (UniqueName: \"kubernetes.io/projected/a8815111-fe36-4868-b092-2f88255f8f2b-kube-api-access-nntvj\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.045980 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.046040 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-config-data\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.046108 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.046128 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-public-tls-certs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.046227 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8815111-fe36-4868-b092-2f88255f8f2b-logs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.046868 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8815111-fe36-4868-b092-2f88255f8f2b-logs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.054738 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-public-tls-certs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 
14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.055504 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.055826 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-config-data\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.056235 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8815111-fe36-4868-b092-2f88255f8f2b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.069965 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntvj\" (UniqueName: \"kubernetes.io/projected/a8815111-fe36-4868-b092-2f88255f8f2b-kube-api-access-nntvj\") pod \"nova-api-0\" (UID: \"a8815111-fe36-4868-b092-2f88255f8f2b\") " pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.250865 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.762275 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 14:32:09 crc kubenswrapper[4836]: I0217 14:32:09.808210 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8815111-fe36-4868-b092-2f88255f8f2b","Type":"ContainerStarted","Data":"5f03bdb5d30d971ad9d332b69b10a6ab0decf96b5c6d26985596b2a2ea77c8b7"} Feb 17 14:32:10 crc kubenswrapper[4836]: I0217 14:32:10.585106 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95da3ac-7563-49bf-a956-b19297cb7d97" path="/var/lib/kubelet/pods/c95da3ac-7563-49bf-a956-b19297cb7d97/volumes" Feb 17 14:32:10 crc kubenswrapper[4836]: I0217 14:32:10.829765 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8815111-fe36-4868-b092-2f88255f8f2b","Type":"ContainerStarted","Data":"ab504b71215e8d4d5ca609a1e042996ab7cbd89da94251614b26a4005e6be5e5"} Feb 17 14:32:10 crc kubenswrapper[4836]: I0217 14:32:10.829820 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8815111-fe36-4868-b092-2f88255f8f2b","Type":"ContainerStarted","Data":"74ab7ec7c7960538f983d4287ec205b08238f350ade0a8215a914b2068a69f5d"} Feb 17 14:32:10 crc kubenswrapper[4836]: I0217 14:32:10.865671 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.865377303 podStartE2EDuration="2.865377303s" podCreationTimestamp="2026-02-17 14:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:32:10.850347785 +0000 UTC m=+1557.193276064" watchObservedRunningTime="2026-02-17 14:32:10.865377303 +0000 UTC m=+1557.208305572" Feb 17 14:32:12 crc kubenswrapper[4836]: I0217 14:32:12.139620 4836 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 14:32:12 crc kubenswrapper[4836]: I0217 14:32:12.140018 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 14:32:12 crc kubenswrapper[4836]: I0217 14:32:12.678662 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 14:32:15 crc kubenswrapper[4836]: I0217 14:32:15.574718 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:32:15 crc kubenswrapper[4836]: I0217 14:32:15.632879 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:32:15 crc kubenswrapper[4836]: I0217 14:32:15.828813 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gczl5"] Feb 17 14:32:16 crc kubenswrapper[4836]: I0217 14:32:16.901642 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gczl5" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="registry-server" containerID="cri-o://53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef" gracePeriod=2 Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.139865 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.140320 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.544163 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.678336 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.707274 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8pkv\" (UniqueName: \"kubernetes.io/projected/4cd9d6fa-d7e3-483f-afdf-104754807815-kube-api-access-c8pkv\") pod \"4cd9d6fa-d7e3-483f-afdf-104754807815\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.707467 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-utilities\") pod \"4cd9d6fa-d7e3-483f-afdf-104754807815\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.707630 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-catalog-content\") pod \"4cd9d6fa-d7e3-483f-afdf-104754807815\" (UID: \"4cd9d6fa-d7e3-483f-afdf-104754807815\") " Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.708228 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-utilities" (OuterVolumeSpecName: "utilities") pod "4cd9d6fa-d7e3-483f-afdf-104754807815" (UID: "4cd9d6fa-d7e3-483f-afdf-104754807815"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.715475 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd9d6fa-d7e3-483f-afdf-104754807815-kube-api-access-c8pkv" (OuterVolumeSpecName: "kube-api-access-c8pkv") pod "4cd9d6fa-d7e3-483f-afdf-104754807815" (UID: "4cd9d6fa-d7e3-483f-afdf-104754807815"). InnerVolumeSpecName "kube-api-access-c8pkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.742723 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.763970 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cd9d6fa-d7e3-483f-afdf-104754807815" (UID: "4cd9d6fa-d7e3-483f-afdf-104754807815"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.809781 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.809822 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd9d6fa-d7e3-483f-afdf-104754807815-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.809836 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8pkv\" (UniqueName: \"kubernetes.io/projected/4cd9d6fa-d7e3-483f-afdf-104754807815-kube-api-access-c8pkv\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.915569 4836 generic.go:334] "Generic (PLEG): container finished" podID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerID="53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef" exitCode=0 Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.915657 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gczl5" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.915695 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gczl5" event={"ID":"4cd9d6fa-d7e3-483f-afdf-104754807815","Type":"ContainerDied","Data":"53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef"} Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.915779 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gczl5" event={"ID":"4cd9d6fa-d7e3-483f-afdf-104754807815","Type":"ContainerDied","Data":"bce52c57a9fcf2c1e5a87f3b45145f1651223562443f585e0fbfde97d5bea9a0"} Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.915821 4836 scope.go:117] "RemoveContainer" containerID="53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.953550 4836 scope.go:117] "RemoveContainer" containerID="f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.962818 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gczl5"] Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.965494 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.976725 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gczl5"] Feb 17 14:32:17 crc kubenswrapper[4836]: I0217 14:32:17.990200 4836 scope.go:117] "RemoveContainer" containerID="d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.039838 4836 scope.go:117] "RemoveContainer" containerID="53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef" Feb 17 14:32:18 crc kubenswrapper[4836]: E0217 
14:32:18.040574 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef\": container with ID starting with 53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef not found: ID does not exist" containerID="53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.040638 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef"} err="failed to get container status \"53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef\": rpc error: code = NotFound desc = could not find container \"53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef\": container with ID starting with 53b60b98531abff91e0a218cd8150c220b43da5bca3da77286dd345ff07020ef not found: ID does not exist" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.040678 4836 scope.go:117] "RemoveContainer" containerID="f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947" Feb 17 14:32:18 crc kubenswrapper[4836]: E0217 14:32:18.041530 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947\": container with ID starting with f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947 not found: ID does not exist" containerID="f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.041570 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947"} err="failed to get container status \"f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947\": rpc 
error: code = NotFound desc = could not find container \"f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947\": container with ID starting with f59ee91509ea9e36d15899731572e250cd29f78a13163eb836fae779d30ef947 not found: ID does not exist" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.041593 4836 scope.go:117] "RemoveContainer" containerID="d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2" Feb 17 14:32:18 crc kubenswrapper[4836]: E0217 14:32:18.042031 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2\": container with ID starting with d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2 not found: ID does not exist" containerID="d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.042070 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2"} err="failed to get container status \"d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2\": rpc error: code = NotFound desc = could not find container \"d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2\": container with ID starting with d729bd6482605fb4977c1bf01960378417a9b2e6780614aa5f4dbff52eeda0e2 not found: ID does not exist" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.188908 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c56150e0-07ff-4a45-9231-26fa261942c4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.234:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.189074 4836 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="c56150e0-07ff-4a45-9231-26fa261942c4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.234:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:32:18 crc kubenswrapper[4836]: I0217 14:32:18.583183 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" path="/var/lib/kubelet/pods/4cd9d6fa-d7e3-483f-afdf-104754807815/volumes" Feb 17 14:32:19 crc kubenswrapper[4836]: I0217 14:32:19.251384 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:32:19 crc kubenswrapper[4836]: I0217 14:32:19.251767 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 14:32:20 crc kubenswrapper[4836]: I0217 14:32:20.264533 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a8815111-fe36-4868-b092-2f88255f8f2b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.236:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:32:20 crc kubenswrapper[4836]: I0217 14:32:20.264525 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a8815111-fe36-4868-b092-2f88255f8f2b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.236:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 14:32:22 crc kubenswrapper[4836]: I0217 14:32:22.336366 4836 scope.go:117] "RemoveContainer" containerID="fa952a578ab7d74e43550d2abf42e1871632978ec68916c0a6508b2ed82226f0" Feb 17 14:32:22 crc kubenswrapper[4836]: I0217 14:32:22.375652 4836 scope.go:117] "RemoveContainer" containerID="99c757b68ed859a793668b56d22b853641589be9aa542f670159f298a8c5ffcd" Feb 17 14:32:22 crc kubenswrapper[4836]: I0217 14:32:22.424414 4836 
scope.go:117] "RemoveContainer" containerID="57ea1eebc786d3a8ae12a685cfa802406deab325110c652e436a68a0c258022f" Feb 17 14:32:22 crc kubenswrapper[4836]: I0217 14:32:22.474054 4836 scope.go:117] "RemoveContainer" containerID="b84dd65de54881081222d1401d684becd3ab6f396a5d3ddb1a10e413f4f858e0" Feb 17 14:32:27 crc kubenswrapper[4836]: I0217 14:32:27.146026 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 14:32:27 crc kubenswrapper[4836]: I0217 14:32:27.147873 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 14:32:27 crc kubenswrapper[4836]: I0217 14:32:27.152306 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 14:32:28 crc kubenswrapper[4836]: I0217 14:32:28.066538 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 14:32:29 crc kubenswrapper[4836]: I0217 14:32:29.261886 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 14:32:29 crc kubenswrapper[4836]: I0217 14:32:29.262017 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 14:32:29 crc kubenswrapper[4836]: I0217 14:32:29.263055 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 14:32:29 crc kubenswrapper[4836]: I0217 14:32:29.263256 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 14:32:29 crc kubenswrapper[4836]: I0217 14:32:29.271902 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 14:32:29 crc kubenswrapper[4836]: I0217 14:32:29.275588 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 14:32:30 crc kubenswrapper[4836]: I0217 
14:32:30.236551 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.157246 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-79spl"] Feb 17 14:32:47 crc kubenswrapper[4836]: E0217 14:32:47.160054 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="extract-content" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.160169 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="extract-content" Feb 17 14:32:47 crc kubenswrapper[4836]: E0217 14:32:47.160245 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="registry-server" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.160316 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="registry-server" Feb 17 14:32:47 crc kubenswrapper[4836]: E0217 14:32:47.160386 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="extract-utilities" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.160442 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="extract-utilities" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.160920 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd9d6fa-d7e3-483f-afdf-104754807815" containerName="registry-server" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.163097 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.171309 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79spl"] Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.248767 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-utilities\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.248933 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjml9\" (UniqueName: \"kubernetes.io/projected/349d9039-4cce-4f99-83f3-f12ad111cae1-kube-api-access-hjml9\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.248976 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-catalog-content\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.352198 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-utilities\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.352367 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hjml9\" (UniqueName: \"kubernetes.io/projected/349d9039-4cce-4f99-83f3-f12ad111cae1-kube-api-access-hjml9\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.352409 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-catalog-content\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.352727 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-utilities\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.352840 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-catalog-content\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.375791 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjml9\" (UniqueName: \"kubernetes.io/projected/349d9039-4cce-4f99-83f3-f12ad111cae1-kube-api-access-hjml9\") pod \"redhat-marketplace-79spl\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:47 crc kubenswrapper[4836]: I0217 14:32:47.510266 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:48 crc kubenswrapper[4836]: I0217 14:32:48.114977 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79spl"] Feb 17 14:32:48 crc kubenswrapper[4836]: I0217 14:32:48.291465 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79spl" event={"ID":"349d9039-4cce-4f99-83f3-f12ad111cae1","Type":"ContainerStarted","Data":"a013853d66ffa5e96192c446178dcfb4fc6eec81200e42d20a9171d8c3996fc2"} Feb 17 14:32:49 crc kubenswrapper[4836]: I0217 14:32:49.306233 4836 generic.go:334] "Generic (PLEG): container finished" podID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerID="dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0" exitCode=0 Feb 17 14:32:49 crc kubenswrapper[4836]: I0217 14:32:49.306340 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79spl" event={"ID":"349d9039-4cce-4f99-83f3-f12ad111cae1","Type":"ContainerDied","Data":"dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0"} Feb 17 14:32:49 crc kubenswrapper[4836]: I0217 14:32:49.309654 4836 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:32:51 crc kubenswrapper[4836]: I0217 14:32:51.336261 4836 generic.go:334] "Generic (PLEG): container finished" podID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerID="70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b" exitCode=0 Feb 17 14:32:51 crc kubenswrapper[4836]: I0217 14:32:51.336379 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79spl" event={"ID":"349d9039-4cce-4f99-83f3-f12ad111cae1","Type":"ContainerDied","Data":"70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b"} Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.404844 4836 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-79spl" event={"ID":"349d9039-4cce-4f99-83f3-f12ad111cae1","Type":"ContainerStarted","Data":"569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6"} Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.435040 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-79spl" podStartSLOduration=2.907335903 podStartE2EDuration="5.435014371s" podCreationTimestamp="2026-02-17 14:32:47 +0000 UTC" firstStartedPulling="2026-02-17 14:32:49.309275708 +0000 UTC m=+1595.652203977" lastFinishedPulling="2026-02-17 14:32:51.836954176 +0000 UTC m=+1598.179882445" observedRunningTime="2026-02-17 14:32:52.431629129 +0000 UTC m=+1598.774557408" watchObservedRunningTime="2026-02-17 14:32:52.435014371 +0000 UTC m=+1598.777942640" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.506379 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-65gzh"] Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.509025 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.521721 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65gzh"] Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.606413 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-catalog-content\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.606667 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-utilities\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.606953 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8246n\" (UniqueName: \"kubernetes.io/projected/52cf9c20-bb50-4295-a308-add7b717f6ce-kube-api-access-8246n\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.709071 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8246n\" (UniqueName: \"kubernetes.io/projected/52cf9c20-bb50-4295-a308-add7b717f6ce-kube-api-access-8246n\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.709250 4836 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-catalog-content\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.709338 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-utilities\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.709985 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-utilities\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.709992 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-catalog-content\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.737288 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8246n\" (UniqueName: \"kubernetes.io/projected/52cf9c20-bb50-4295-a308-add7b717f6ce-kube-api-access-8246n\") pod \"community-operators-65gzh\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:52 crc kubenswrapper[4836]: I0217 14:32:52.832733 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:32:53 crc kubenswrapper[4836]: I0217 14:32:53.597210 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65gzh"] Feb 17 14:32:54 crc kubenswrapper[4836]: I0217 14:32:54.436243 4836 generic.go:334] "Generic (PLEG): container finished" podID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerID="589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2" exitCode=0 Feb 17 14:32:54 crc kubenswrapper[4836]: I0217 14:32:54.436367 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65gzh" event={"ID":"52cf9c20-bb50-4295-a308-add7b717f6ce","Type":"ContainerDied","Data":"589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2"} Feb 17 14:32:54 crc kubenswrapper[4836]: I0217 14:32:54.436637 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65gzh" event={"ID":"52cf9c20-bb50-4295-a308-add7b717f6ce","Type":"ContainerStarted","Data":"845d9e040e01c21eede646e7791a3eeafb44af87e66b2eea5e7b077448fb458c"} Feb 17 14:32:55 crc kubenswrapper[4836]: I0217 14:32:55.449517 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65gzh" event={"ID":"52cf9c20-bb50-4295-a308-add7b717f6ce","Type":"ContainerStarted","Data":"e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04"} Feb 17 14:32:57 crc kubenswrapper[4836]: I0217 14:32:57.476147 4836 generic.go:334] "Generic (PLEG): container finished" podID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerID="e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04" exitCode=0 Feb 17 14:32:57 crc kubenswrapper[4836]: I0217 14:32:57.477098 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65gzh" 
event={"ID":"52cf9c20-bb50-4295-a308-add7b717f6ce","Type":"ContainerDied","Data":"e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04"} Feb 17 14:32:57 crc kubenswrapper[4836]: I0217 14:32:57.510465 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:57 crc kubenswrapper[4836]: I0217 14:32:57.510813 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:57 crc kubenswrapper[4836]: I0217 14:32:57.575430 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:58 crc kubenswrapper[4836]: I0217 14:32:58.494220 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65gzh" event={"ID":"52cf9c20-bb50-4295-a308-add7b717f6ce","Type":"ContainerStarted","Data":"ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf"} Feb 17 14:32:58 crc kubenswrapper[4836]: I0217 14:32:58.525360 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-65gzh" podStartSLOduration=3.108142823 podStartE2EDuration="6.525336743s" podCreationTimestamp="2026-02-17 14:32:52 +0000 UTC" firstStartedPulling="2026-02-17 14:32:54.43908251 +0000 UTC m=+1600.782010779" lastFinishedPulling="2026-02-17 14:32:57.85627643 +0000 UTC m=+1604.199204699" observedRunningTime="2026-02-17 14:32:58.522861417 +0000 UTC m=+1604.865789706" watchObservedRunningTime="2026-02-17 14:32:58.525336743 +0000 UTC m=+1604.868265012" Feb 17 14:32:58 crc kubenswrapper[4836]: I0217 14:32:58.558654 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:32:59 crc kubenswrapper[4836]: I0217 14:32:59.765518 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:32:59 crc kubenswrapper[4836]: I0217 14:32:59.765956 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:33:00 crc kubenswrapper[4836]: I0217 14:33:00.698240 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79spl"] Feb 17 14:33:01 crc kubenswrapper[4836]: I0217 14:33:01.547730 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-79spl" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="registry-server" containerID="cri-o://569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6" gracePeriod=2 Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.184895 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.269224 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjml9\" (UniqueName: \"kubernetes.io/projected/349d9039-4cce-4f99-83f3-f12ad111cae1-kube-api-access-hjml9\") pod \"349d9039-4cce-4f99-83f3-f12ad111cae1\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.269417 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-catalog-content\") pod \"349d9039-4cce-4f99-83f3-f12ad111cae1\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.269615 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-utilities\") pod \"349d9039-4cce-4f99-83f3-f12ad111cae1\" (UID: \"349d9039-4cce-4f99-83f3-f12ad111cae1\") " Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.270124 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-utilities" (OuterVolumeSpecName: "utilities") pod "349d9039-4cce-4f99-83f3-f12ad111cae1" (UID: "349d9039-4cce-4f99-83f3-f12ad111cae1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.277732 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349d9039-4cce-4f99-83f3-f12ad111cae1-kube-api-access-hjml9" (OuterVolumeSpecName: "kube-api-access-hjml9") pod "349d9039-4cce-4f99-83f3-f12ad111cae1" (UID: "349d9039-4cce-4f99-83f3-f12ad111cae1"). InnerVolumeSpecName "kube-api-access-hjml9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.291157 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "349d9039-4cce-4f99-83f3-f12ad111cae1" (UID: "349d9039-4cce-4f99-83f3-f12ad111cae1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.372946 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjml9\" (UniqueName: \"kubernetes.io/projected/349d9039-4cce-4f99-83f3-f12ad111cae1-kube-api-access-hjml9\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.372996 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.373009 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349d9039-4cce-4f99-83f3-f12ad111cae1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.560186 4836 generic.go:334] "Generic (PLEG): container finished" podID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerID="569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6" exitCode=0 Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.560292 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79spl" event={"ID":"349d9039-4cce-4f99-83f3-f12ad111cae1","Type":"ContainerDied","Data":"569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6"} Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.560328 4836 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79spl" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.560371 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79spl" event={"ID":"349d9039-4cce-4f99-83f3-f12ad111cae1","Type":"ContainerDied","Data":"a013853d66ffa5e96192c446178dcfb4fc6eec81200e42d20a9171d8c3996fc2"} Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.560402 4836 scope.go:117] "RemoveContainer" containerID="569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.587161 4836 scope.go:117] "RemoveContainer" containerID="70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.610702 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79spl"] Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.625128 4836 scope.go:117] "RemoveContainer" containerID="dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.626005 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-79spl"] Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.675133 4836 scope.go:117] "RemoveContainer" containerID="569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6" Feb 17 14:33:02 crc kubenswrapper[4836]: E0217 14:33:02.675929 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6\": container with ID starting with 569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6 not found: ID does not exist" containerID="569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.676001 4836 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6"} err="failed to get container status \"569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6\": rpc error: code = NotFound desc = could not find container \"569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6\": container with ID starting with 569a357266f7e1f0fc1cc6f6be5224922001d03792c86e5f281447d8382e40c6 not found: ID does not exist" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.676038 4836 scope.go:117] "RemoveContainer" containerID="70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b" Feb 17 14:33:02 crc kubenswrapper[4836]: E0217 14:33:02.676600 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b\": container with ID starting with 70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b not found: ID does not exist" containerID="70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.676639 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b"} err="failed to get container status \"70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b\": rpc error: code = NotFound desc = could not find container \"70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b\": container with ID starting with 70b11e76751646ba17646df4e29485c533614f909c1dba6960938f572df3db8b not found: ID does not exist" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.676666 4836 scope.go:117] "RemoveContainer" containerID="dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0" Feb 17 14:33:02 crc kubenswrapper[4836]: E0217 
14:33:02.677183 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0\": container with ID starting with dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0 not found: ID does not exist" containerID="dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.677222 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0"} err="failed to get container status \"dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0\": rpc error: code = NotFound desc = could not find container \"dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0\": container with ID starting with dd37c66da0b978389c4d358d2f04173d267ab1cfda22f7781de1a6f5ae779bc0 not found: ID does not exist" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.833553 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.833632 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:33:02 crc kubenswrapper[4836]: I0217 14:33:02.897633 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:33:03 crc kubenswrapper[4836]: I0217 14:33:03.619337 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:33:04 crc kubenswrapper[4836]: I0217 14:33:04.580848 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" 
path="/var/lib/kubelet/pods/349d9039-4cce-4f99-83f3-f12ad111cae1/volumes" Feb 17 14:33:05 crc kubenswrapper[4836]: I0217 14:33:05.100787 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65gzh"] Feb 17 14:33:05 crc kubenswrapper[4836]: I0217 14:33:05.605043 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-65gzh" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="registry-server" containerID="cri-o://ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf" gracePeriod=2 Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.285345 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.468779 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8246n\" (UniqueName: \"kubernetes.io/projected/52cf9c20-bb50-4295-a308-add7b717f6ce-kube-api-access-8246n\") pod \"52cf9c20-bb50-4295-a308-add7b717f6ce\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.469163 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-catalog-content\") pod \"52cf9c20-bb50-4295-a308-add7b717f6ce\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.469446 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-utilities\") pod \"52cf9c20-bb50-4295-a308-add7b717f6ce\" (UID: \"52cf9c20-bb50-4295-a308-add7b717f6ce\") " Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.470066 4836 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-utilities" (OuterVolumeSpecName: "utilities") pod "52cf9c20-bb50-4295-a308-add7b717f6ce" (UID: "52cf9c20-bb50-4295-a308-add7b717f6ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.470534 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.485262 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cf9c20-bb50-4295-a308-add7b717f6ce-kube-api-access-8246n" (OuterVolumeSpecName: "kube-api-access-8246n") pod "52cf9c20-bb50-4295-a308-add7b717f6ce" (UID: "52cf9c20-bb50-4295-a308-add7b717f6ce"). InnerVolumeSpecName "kube-api-access-8246n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.544514 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52cf9c20-bb50-4295-a308-add7b717f6ce" (UID: "52cf9c20-bb50-4295-a308-add7b717f6ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.572937 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8246n\" (UniqueName: \"kubernetes.io/projected/52cf9c20-bb50-4295-a308-add7b717f6ce-kube-api-access-8246n\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.572984 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52cf9c20-bb50-4295-a308-add7b717f6ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.619730 4836 generic.go:334] "Generic (PLEG): container finished" podID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerID="ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf" exitCode=0 Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.619799 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65gzh" event={"ID":"52cf9c20-bb50-4295-a308-add7b717f6ce","Type":"ContainerDied","Data":"ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf"} Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.619848 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65gzh" event={"ID":"52cf9c20-bb50-4295-a308-add7b717f6ce","Type":"ContainerDied","Data":"845d9e040e01c21eede646e7791a3eeafb44af87e66b2eea5e7b077448fb458c"} Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.619871 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65gzh" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.619882 4836 scope.go:117] "RemoveContainer" containerID="ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.664927 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65gzh"] Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.667210 4836 scope.go:117] "RemoveContainer" containerID="e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.685012 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-65gzh"] Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.710829 4836 scope.go:117] "RemoveContainer" containerID="589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.752550 4836 scope.go:117] "RemoveContainer" containerID="ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf" Feb 17 14:33:06 crc kubenswrapper[4836]: E0217 14:33:06.753235 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf\": container with ID starting with ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf not found: ID does not exist" containerID="ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.753322 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf"} err="failed to get container status \"ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf\": rpc error: code = NotFound desc = could not find 
container \"ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf\": container with ID starting with ba76b4465f5c067dce887a59bb959e096cd91077fbbe7fff1637f01594af51cf not found: ID does not exist" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.753366 4836 scope.go:117] "RemoveContainer" containerID="e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04" Feb 17 14:33:06 crc kubenswrapper[4836]: E0217 14:33:06.753733 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04\": container with ID starting with e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04 not found: ID does not exist" containerID="e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.753776 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04"} err="failed to get container status \"e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04\": rpc error: code = NotFound desc = could not find container \"e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04\": container with ID starting with e9baafd53b9f38a4a14db774f6c7ff65f7621f5b63db204c143d432c7c69cf04 not found: ID does not exist" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.753796 4836 scope.go:117] "RemoveContainer" containerID="589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2" Feb 17 14:33:06 crc kubenswrapper[4836]: E0217 14:33:06.754193 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2\": container with ID starting with 589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2 not found: ID does 
not exist" containerID="589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2" Feb 17 14:33:06 crc kubenswrapper[4836]: I0217 14:33:06.754244 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2"} err="failed to get container status \"589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2\": rpc error: code = NotFound desc = could not find container \"589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2\": container with ID starting with 589c03a33d996d19da55c16d13129e22ee3a80583c3c47707a14170d5f4c36a2 not found: ID does not exist" Feb 17 14:33:08 crc kubenswrapper[4836]: I0217 14:33:08.580506 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" path="/var/lib/kubelet/pods/52cf9c20-bb50-4295-a308-add7b717f6ce/volumes" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.453206 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-snsbl/must-gather-4sqf7"] Feb 17 14:33:09 crc kubenswrapper[4836]: E0217 14:33:09.453851 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="registry-server" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.453878 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="registry-server" Feb 17 14:33:09 crc kubenswrapper[4836]: E0217 14:33:09.453890 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="extract-utilities" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.453898 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="extract-utilities" Feb 17 14:33:09 crc kubenswrapper[4836]: E0217 14:33:09.453927 4836 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="registry-server" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.453933 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="registry-server" Feb 17 14:33:09 crc kubenswrapper[4836]: E0217 14:33:09.453945 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="extract-content" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.453952 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="extract-content" Feb 17 14:33:09 crc kubenswrapper[4836]: E0217 14:33:09.453972 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="extract-content" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.453978 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="extract-content" Feb 17 14:33:09 crc kubenswrapper[4836]: E0217 14:33:09.453997 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="extract-utilities" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.454002 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="extract-utilities" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.454211 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cf9c20-bb50-4295-a308-add7b717f6ce" containerName="registry-server" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.454236 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="349d9039-4cce-4f99-83f3-f12ad111cae1" containerName="registry-server" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.455659 4836 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.458523 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-snsbl"/"openshift-service-ca.crt" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.458759 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-snsbl"/"kube-root-ca.crt" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.494582 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-snsbl/must-gather-4sqf7"] Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.577029 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/781729f0-fe27-45e7-bd7b-23709696ec4d-must-gather-output\") pod \"must-gather-4sqf7\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.577197 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6j6w\" (UniqueName: \"kubernetes.io/projected/781729f0-fe27-45e7-bd7b-23709696ec4d-kube-api-access-t6j6w\") pod \"must-gather-4sqf7\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.678034 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/781729f0-fe27-45e7-bd7b-23709696ec4d-must-gather-output\") pod \"must-gather-4sqf7\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.678156 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-t6j6w\" (UniqueName: \"kubernetes.io/projected/781729f0-fe27-45e7-bd7b-23709696ec4d-kube-api-access-t6j6w\") pod \"must-gather-4sqf7\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.678516 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/781729f0-fe27-45e7-bd7b-23709696ec4d-must-gather-output\") pod \"must-gather-4sqf7\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.708866 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6j6w\" (UniqueName: \"kubernetes.io/projected/781729f0-fe27-45e7-bd7b-23709696ec4d-kube-api-access-t6j6w\") pod \"must-gather-4sqf7\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:09 crc kubenswrapper[4836]: I0217 14:33:09.780620 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:33:10 crc kubenswrapper[4836]: I0217 14:33:10.301086 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-snsbl/must-gather-4sqf7"] Feb 17 14:33:10 crc kubenswrapper[4836]: I0217 14:33:10.679641 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/must-gather-4sqf7" event={"ID":"781729f0-fe27-45e7-bd7b-23709696ec4d","Type":"ContainerStarted","Data":"7228d3ea40d695c4e76f39706f5515b626db9e0c64eeee26178fe2c013e04da1"} Feb 17 14:33:20 crc kubenswrapper[4836]: I0217 14:33:20.829769 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/must-gather-4sqf7" event={"ID":"781729f0-fe27-45e7-bd7b-23709696ec4d","Type":"ContainerStarted","Data":"07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67"} Feb 17 14:33:20 crc kubenswrapper[4836]: I0217 14:33:20.830405 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/must-gather-4sqf7" event={"ID":"781729f0-fe27-45e7-bd7b-23709696ec4d","Type":"ContainerStarted","Data":"f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d"} Feb 17 14:33:20 crc kubenswrapper[4836]: I0217 14:33:20.853383 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-snsbl/must-gather-4sqf7" podStartSLOduration=2.307672306 podStartE2EDuration="11.853348555s" podCreationTimestamp="2026-02-17 14:33:09 +0000 UTC" firstStartedPulling="2026-02-17 14:33:10.317001362 +0000 UTC m=+1616.659929631" lastFinishedPulling="2026-02-17 14:33:19.862677611 +0000 UTC m=+1626.205605880" observedRunningTime="2026-02-17 14:33:20.846852181 +0000 UTC m=+1627.189780460" watchObservedRunningTime="2026-02-17 14:33:20.853348555 +0000 UTC m=+1627.196276844" Feb 17 14:33:22 crc kubenswrapper[4836]: I0217 14:33:22.758322 4836 scope.go:117] "RemoveContainer" 
containerID="faf1f0c01e2ba58effda0101e73091532e490c7632b908240461cde1c4eacd7e" Feb 17 14:33:22 crc kubenswrapper[4836]: I0217 14:33:22.804957 4836 scope.go:117] "RemoveContainer" containerID="d1ced8732b18e9a32bcf99bb2f034caca8afbe19ad1c6c3a49849748da69630c" Feb 17 14:33:22 crc kubenswrapper[4836]: I0217 14:33:22.876827 4836 scope.go:117] "RemoveContainer" containerID="de75bc86bd0570fcef07a3f3195cfec352721b59eef66e22b061ebca87ca6456" Feb 17 14:33:22 crc kubenswrapper[4836]: I0217 14:33:22.902226 4836 scope.go:117] "RemoveContainer" containerID="44931b40ada4bc7bee4acb5d1054d14507951ed9df360a9eb97ae5e6b0efb503" Feb 17 14:33:22 crc kubenswrapper[4836]: I0217 14:33:22.925252 4836 scope.go:117] "RemoveContainer" containerID="4c54331d8c22a82e7135a4bdfa56b01c1bacccea5967146f9a8bb1c17d9ca3da" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.430665 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-snsbl/crc-debug-stlbp"] Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.432898 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.435216 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-snsbl"/"default-dockercfg-ht85k" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.594528 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc86l\" (UniqueName: \"kubernetes.io/projected/ac579e56-4f23-4b65-ad07-a1df27f67146-kube-api-access-zc86l\") pod \"crc-debug-stlbp\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.597214 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac579e56-4f23-4b65-ad07-a1df27f67146-host\") pod \"crc-debug-stlbp\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.698993 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac579e56-4f23-4b65-ad07-a1df27f67146-host\") pod \"crc-debug-stlbp\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.699157 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac579e56-4f23-4b65-ad07-a1df27f67146-host\") pod \"crc-debug-stlbp\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.699167 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc86l\" (UniqueName: 
\"kubernetes.io/projected/ac579e56-4f23-4b65-ad07-a1df27f67146-kube-api-access-zc86l\") pod \"crc-debug-stlbp\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.720231 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc86l\" (UniqueName: \"kubernetes.io/projected/ac579e56-4f23-4b65-ad07-a1df27f67146-kube-api-access-zc86l\") pod \"crc-debug-stlbp\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.750796 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:25 crc kubenswrapper[4836]: I0217 14:33:25.887679 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/crc-debug-stlbp" event={"ID":"ac579e56-4f23-4b65-ad07-a1df27f67146","Type":"ContainerStarted","Data":"6b9dafc80a9454cecd841ba44f7dffc8dfd96b47bb5dbc19c92e88113d30344a"} Feb 17 14:33:29 crc kubenswrapper[4836]: I0217 14:33:29.765462 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:33:29 crc kubenswrapper[4836]: I0217 14:33:29.766133 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:33:40 crc kubenswrapper[4836]: I0217 14:33:40.086725 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-snsbl/crc-debug-stlbp" event={"ID":"ac579e56-4f23-4b65-ad07-a1df27f67146","Type":"ContainerStarted","Data":"0b1fdb782cc59c87b5c334a8e29bc01c7def7137ff5e1a24115754176ed4d2ab"} Feb 17 14:33:40 crc kubenswrapper[4836]: I0217 14:33:40.116322 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-snsbl/crc-debug-stlbp" podStartSLOduration=1.47615767 podStartE2EDuration="15.116255679s" podCreationTimestamp="2026-02-17 14:33:25 +0000 UTC" firstStartedPulling="2026-02-17 14:33:25.823966619 +0000 UTC m=+1632.166894888" lastFinishedPulling="2026-02-17 14:33:39.464064628 +0000 UTC m=+1645.806992897" observedRunningTime="2026-02-17 14:33:40.103344682 +0000 UTC m=+1646.446272961" watchObservedRunningTime="2026-02-17 14:33:40.116255679 +0000 UTC m=+1646.459183958" Feb 17 14:33:56 crc kubenswrapper[4836]: I0217 14:33:56.443388 4836 generic.go:334] "Generic (PLEG): container finished" podID="ac579e56-4f23-4b65-ad07-a1df27f67146" containerID="0b1fdb782cc59c87b5c334a8e29bc01c7def7137ff5e1a24115754176ed4d2ab" exitCode=0 Feb 17 14:33:56 crc kubenswrapper[4836]: I0217 14:33:56.443473 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/crc-debug-stlbp" event={"ID":"ac579e56-4f23-4b65-ad07-a1df27f67146","Type":"ContainerDied","Data":"0b1fdb782cc59c87b5c334a8e29bc01c7def7137ff5e1a24115754176ed4d2ab"} Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.588418 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.602335 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac579e56-4f23-4b65-ad07-a1df27f67146-host\") pod \"ac579e56-4f23-4b65-ad07-a1df27f67146\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.602460 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac579e56-4f23-4b65-ad07-a1df27f67146-host" (OuterVolumeSpecName: "host") pod "ac579e56-4f23-4b65-ad07-a1df27f67146" (UID: "ac579e56-4f23-4b65-ad07-a1df27f67146"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.602759 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc86l\" (UniqueName: \"kubernetes.io/projected/ac579e56-4f23-4b65-ad07-a1df27f67146-kube-api-access-zc86l\") pod \"ac579e56-4f23-4b65-ad07-a1df27f67146\" (UID: \"ac579e56-4f23-4b65-ad07-a1df27f67146\") " Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.605212 4836 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac579e56-4f23-4b65-ad07-a1df27f67146-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.609452 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac579e56-4f23-4b65-ad07-a1df27f67146-kube-api-access-zc86l" (OuterVolumeSpecName: "kube-api-access-zc86l") pod "ac579e56-4f23-4b65-ad07-a1df27f67146" (UID: "ac579e56-4f23-4b65-ad07-a1df27f67146"). InnerVolumeSpecName "kube-api-access-zc86l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.643715 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-snsbl/crc-debug-stlbp"] Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.667191 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-snsbl/crc-debug-stlbp"] Feb 17 14:33:57 crc kubenswrapper[4836]: I0217 14:33:57.706007 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc86l\" (UniqueName: \"kubernetes.io/projected/ac579e56-4f23-4b65-ad07-a1df27f67146-kube-api-access-zc86l\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.468018 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9dafc80a9454cecd841ba44f7dffc8dfd96b47bb5dbc19c92e88113d30344a" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.468089 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-stlbp" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.581902 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac579e56-4f23-4b65-ad07-a1df27f67146" path="/var/lib/kubelet/pods/ac579e56-4f23-4b65-ad07-a1df27f67146/volumes" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.853580 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-snsbl/crc-debug-pv9j4"] Feb 17 14:33:58 crc kubenswrapper[4836]: E0217 14:33:58.854377 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac579e56-4f23-4b65-ad07-a1df27f67146" containerName="container-00" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.854392 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac579e56-4f23-4b65-ad07-a1df27f67146" containerName="container-00" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.854621 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac579e56-4f23-4b65-ad07-a1df27f67146" containerName="container-00" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.855367 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.858075 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-snsbl"/"default-dockercfg-ht85k" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.900785 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlqh\" (UniqueName: \"kubernetes.io/projected/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-kube-api-access-7rlqh\") pod \"crc-debug-pv9j4\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:58 crc kubenswrapper[4836]: I0217 14:33:58.901102 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-host\") pod \"crc-debug-pv9j4\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.003375 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-host\") pod \"crc-debug-pv9j4\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.003602 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-host\") pod \"crc-debug-pv9j4\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.003641 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlqh\" (UniqueName: 
\"kubernetes.io/projected/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-kube-api-access-7rlqh\") pod \"crc-debug-pv9j4\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.029569 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlqh\" (UniqueName: \"kubernetes.io/projected/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-kube-api-access-7rlqh\") pod \"crc-debug-pv9j4\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.175519 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.481088 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/crc-debug-pv9j4" event={"ID":"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf","Type":"ContainerStarted","Data":"52187a78be9c156d0265b94c5488b994ed0bab26684ef18004535fce0431373c"} Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.764979 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.765101 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.765188 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.766448 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:33:59 crc kubenswrapper[4836]: I0217 14:33:59.766543 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" gracePeriod=600 Feb 17 14:33:59 crc kubenswrapper[4836]: E0217 14:33:59.897409 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:34:00 crc kubenswrapper[4836]: I0217 14:34:00.500971 4836 generic.go:334] "Generic (PLEG): container finished" podID="3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf" containerID="b099deccdc43aaaf5e1d9673615b93cdbff588beb42726f387dc2c0ef267fb73" exitCode=1 Feb 17 14:34:00 crc kubenswrapper[4836]: I0217 14:34:00.501080 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/crc-debug-pv9j4" event={"ID":"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf","Type":"ContainerDied","Data":"b099deccdc43aaaf5e1d9673615b93cdbff588beb42726f387dc2c0ef267fb73"} Feb 17 14:34:00 crc 
kubenswrapper[4836]: I0217 14:34:00.520085 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" exitCode=0 Feb 17 14:34:00 crc kubenswrapper[4836]: I0217 14:34:00.520158 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20"} Feb 17 14:34:00 crc kubenswrapper[4836]: I0217 14:34:00.520206 4836 scope.go:117] "RemoveContainer" containerID="3c09fe81ffce38e5d9ef4195d8e69df0edfb238c5a8b73cb36be460e79dea4bb" Feb 17 14:34:00 crc kubenswrapper[4836]: I0217 14:34:00.521688 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:34:00 crc kubenswrapper[4836]: E0217 14:34:00.522126 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:34:00 crc kubenswrapper[4836]: I0217 14:34:00.616253 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-snsbl/crc-debug-pv9j4"] Feb 17 14:34:00 crc kubenswrapper[4836]: I0217 14:34:00.621971 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-snsbl/crc-debug-pv9j4"] Feb 17 14:34:01 crc kubenswrapper[4836]: I0217 14:34:01.641427 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:34:01 crc kubenswrapper[4836]: I0217 14:34:01.834328 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlqh\" (UniqueName: \"kubernetes.io/projected/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-kube-api-access-7rlqh\") pod \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " Feb 17 14:34:01 crc kubenswrapper[4836]: I0217 14:34:01.834395 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-host\") pod \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\" (UID: \"3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf\") " Feb 17 14:34:01 crc kubenswrapper[4836]: I0217 14:34:01.835091 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-host" (OuterVolumeSpecName: "host") pod "3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf" (UID: "3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:34:01 crc kubenswrapper[4836]: I0217 14:34:01.842884 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-kube-api-access-7rlqh" (OuterVolumeSpecName: "kube-api-access-7rlqh") pod "3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf" (UID: "3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf"). InnerVolumeSpecName "kube-api-access-7rlqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:34:01 crc kubenswrapper[4836]: I0217 14:34:01.937745 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlqh\" (UniqueName: \"kubernetes.io/projected/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-kube-api-access-7rlqh\") on node \"crc\" DevicePath \"\"" Feb 17 14:34:01 crc kubenswrapper[4836]: I0217 14:34:01.937788 4836 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:34:02 crc kubenswrapper[4836]: I0217 14:34:02.546137 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52187a78be9c156d0265b94c5488b994ed0bab26684ef18004535fce0431373c" Feb 17 14:34:02 crc kubenswrapper[4836]: I0217 14:34:02.546521 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snsbl/crc-debug-pv9j4" Feb 17 14:34:02 crc kubenswrapper[4836]: I0217 14:34:02.584252 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf" path="/var/lib/kubelet/pods/3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf/volumes" Feb 17 14:34:14 crc kubenswrapper[4836]: I0217 14:34:14.578833 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:34:14 crc kubenswrapper[4836]: E0217 14:34:14.579899 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:34:23 crc kubenswrapper[4836]: I0217 14:34:23.147103 4836 
scope.go:117] "RemoveContainer" containerID="85bf6d2c05b11776e36fd7dffb8368edf8f8e5b125a942780ac6175dd831a159" Feb 17 14:34:23 crc kubenswrapper[4836]: I0217 14:34:23.190873 4836 scope.go:117] "RemoveContainer" containerID="14423eb209623d815ed52e92ff6318e5e659fcf35e927a649dbd595f58224937" Feb 17 14:34:29 crc kubenswrapper[4836]: I0217 14:34:29.568421 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:34:29 crc kubenswrapper[4836]: E0217 14:34:29.570533 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:34:40 crc kubenswrapper[4836]: I0217 14:34:40.568665 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:34:40 crc kubenswrapper[4836]: E0217 14:34:40.570030 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:34:54 crc kubenswrapper[4836]: I0217 14:34:54.579178 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:34:54 crc kubenswrapper[4836]: E0217 14:34:54.580319 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:35:07 crc kubenswrapper[4836]: I0217 14:35:07.569155 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:35:07 crc kubenswrapper[4836]: E0217 14:35:07.570539 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:35:08 crc kubenswrapper[4836]: I0217 14:35:08.071587 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_039a526c-4f5a-4641-9340-b18459145569/init-config-reloader/0.log" Feb 17 14:35:08 crc kubenswrapper[4836]: I0217 14:35:08.769270 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_039a526c-4f5a-4641-9340-b18459145569/init-config-reloader/0.log" Feb 17 14:35:08 crc kubenswrapper[4836]: I0217 14:35:08.861390 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_039a526c-4f5a-4641-9340-b18459145569/alertmanager/0.log" Feb 17 14:35:08 crc kubenswrapper[4836]: I0217 14:35:08.906146 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_039a526c-4f5a-4641-9340-b18459145569/config-reloader/0.log" Feb 17 14:35:09 crc kubenswrapper[4836]: I0217 14:35:09.342961 4836 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-652c-account-create-update-lswdv_767841a7-db94-430a-b408-10e5bd0350e5/mariadb-account-create-update/0.log" Feb 17 14:35:09 crc kubenswrapper[4836]: I0217 14:35:09.356953 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7dc9c9fdbb-zxjj6_62b902ba-6ba2-48f3-a6dc-652fd1d6d58c/barbican-api/0.log" Feb 17 14:35:09 crc kubenswrapper[4836]: I0217 14:35:09.452241 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7dc9c9fdbb-zxjj6_62b902ba-6ba2-48f3-a6dc-652fd1d6d58c/barbican-api-log/0.log" Feb 17 14:35:09 crc kubenswrapper[4836]: I0217 14:35:09.673196 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-create-69hk6_4edeb89f-0bd9-466e-a9f9-2d45575d2c72/mariadb-database-create/0.log" Feb 17 14:35:09 crc kubenswrapper[4836]: I0217 14:35:09.713455 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-sync-g9l4s_18361bc2-5db1-4611-be18-38593e0b5d5d/barbican-db-sync/0.log" Feb 17 14:35:09 crc kubenswrapper[4836]: I0217 14:35:09.902863 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-68fd77ffbb-m5r5c_f79d706e-2d22-49c6-acb5-dc3f130ab102/barbican-keystone-listener/0.log" Feb 17 14:35:09 crc kubenswrapper[4836]: I0217 14:35:09.994104 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-68fd77ffbb-m5r5c_f79d706e-2d22-49c6-acb5-dc3f130ab102/barbican-keystone-listener-log/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.144566 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6567fb9c77-xcq7p_bf33e52a-365f-4ccc-8352-f4c7f8e2aebd/barbican-worker/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.285402 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-6567fb9c77-xcq7p_bf33e52a-365f-4ccc-8352-f4c7f8e2aebd/barbican-worker-log/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.330798 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ddcf30e-7916-4b59-8986-a5d2c218170e/ceilometer-central-agent/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.423428 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ddcf30e-7916-4b59-8986-a5d2c218170e/ceilometer-notification-agent/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.525047 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ddcf30e-7916-4b59-8986-a5d2c218170e/sg-core/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.527974 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ddcf30e-7916-4b59-8986-a5d2c218170e/proxy-httpd/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.653765 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-0d11-account-create-update-jf72z_f9ee15e8-6695-454f-83ad-d54176458497/mariadb-account-create-update/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.883121 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8722776f-950d-46d6-8929-164cc70747af/cinder-api/0.log" Feb 17 14:35:10 crc kubenswrapper[4836]: I0217 14:35:10.889117 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8722776f-950d-46d6-8929-164cc70747af/cinder-api-log/0.log" Feb 17 14:35:11 crc kubenswrapper[4836]: I0217 14:35:11.084071 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-w5qdk_eb354e85-311d-40bb-ae4a-5c535d4d89b9/mariadb-database-create/0.log" Feb 17 14:35:11 crc kubenswrapper[4836]: I0217 14:35:11.109989 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-db-sync-qqwhc_8185c649-f1ad-4230-830d-07d002e5b358/cinder-db-sync/0.log" Feb 17 14:35:12 crc kubenswrapper[4836]: I0217 14:35:12.088286 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0e6a7955-6cfb-4afe-b94a-8900513e5821/probe/0.log" Feb 17 14:35:12 crc kubenswrapper[4836]: I0217 14:35:12.210445 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-2ea0-account-create-update-p7p99_2ee1a0f2-86df-4f97-957a-22bbd7da4505/mariadb-account-create-update/0.log" Feb 17 14:35:12 crc kubenswrapper[4836]: I0217 14:35:12.266754 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0e6a7955-6cfb-4afe-b94a-8900513e5821/cinder-scheduler/0.log" Feb 17 14:35:12 crc kubenswrapper[4836]: I0217 14:35:12.727763 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49/cloudkitty-api/0.log" Feb 17 14:35:12 crc kubenswrapper[4836]: I0217 14:35:12.833614 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_260ff3a2-31e2-4ef9-9d7a-ef52ddb0fc49/cloudkitty-api-log/0.log" Feb 17 14:35:12 crc kubenswrapper[4836]: I0217 14:35:12.885054 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-db-create-jjrp2_a1fe36f3-d6b6-44e0-b85b-6def754fd08e/mariadb-database-create/0.log" Feb 17 14:35:13 crc kubenswrapper[4836]: I0217 14:35:13.068795 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-db-sync-pvljf_4e016162-2025-44ad-989d-ce71d9f8f9bf/cloudkitty-db-sync/0.log" Feb 17 14:35:13 crc kubenswrapper[4836]: I0217 14:35:13.120102 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_e2c3e649-7933-49e2-800c-b66dbd377ac6/loki-compactor/0.log" Feb 17 14:35:13 crc kubenswrapper[4836]: I0217 14:35:13.347679 4836 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-r4gdh_33c54f8c-91c4-4742-b545-d0e2c4e85fe2/loki-distributor/0.log" Feb 17 14:35:13 crc kubenswrapper[4836]: I0217 14:35:13.448620 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-nbvnf_a977b831-7959-4509-93bf-a45b375ca722/gateway/0.log" Feb 17 14:35:13 crc kubenswrapper[4836]: I0217 14:35:13.597243 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-q78z5_974f66b3-690f-4008-949d-1d57c978d427/gateway/0.log" Feb 17 14:35:13 crc kubenswrapper[4836]: I0217 14:35:13.731396 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_d370240e-d6c1-4d9c-9877-293afa6e77f2/loki-index-gateway/0.log" Feb 17 14:35:13 crc kubenswrapper[4836]: I0217 14:35:13.910779 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_1c33fb01-9bf7-43f1-86d5-004e70d3721c/loki-ingester/0.log" Feb 17 14:35:14 crc kubenswrapper[4836]: I0217 14:35:14.049610 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-fsq2h_27c5f450-8bef-4732-a7fb-272d9b5a4ea8/loki-querier/0.log" Feb 17 14:35:14 crc kubenswrapper[4836]: I0217 14:35:14.245329 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-bqn6j_487d19a3-7f23-4945-bfe1-6231a37a84c6/loki-query-frontend/0.log" Feb 17 14:35:14 crc kubenswrapper[4836]: I0217 14:35:14.449395 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-storageinit-9z4jp_f38b5f94-bc8b-4e64-abe6-8c39b920cb4b/cloudkitty-storageinit/0.log" Feb 17 14:35:15 crc kubenswrapper[4836]: I0217 14:35:15.085377 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5fd9b586ff-snjhj_6dc084a0-be89-4371-92a3-181cfe1979ce/init/0.log" Feb 17 14:35:15 crc kubenswrapper[4836]: I0217 14:35:15.462323 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fd9b586ff-snjhj_6dc084a0-be89-4371-92a3-181cfe1979ce/dnsmasq-dns/0.log" Feb 17 14:35:15 crc kubenswrapper[4836]: I0217 14:35:15.589828 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fd9b586ff-snjhj_6dc084a0-be89-4371-92a3-181cfe1979ce/init/0.log" Feb 17 14:35:15 crc kubenswrapper[4836]: I0217 14:35:15.604197 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-d162-account-create-update-khb5j_1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5/mariadb-account-create-update/0.log" Feb 17 14:35:16 crc kubenswrapper[4836]: I0217 14:35:16.134848 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-z8g7x_df3a6cf1-bca0-45b2-9f7c-6d483452d49d/glance-db-sync/0.log" Feb 17 14:35:16 crc kubenswrapper[4836]: I0217 14:35:16.166580 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-pn587_77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b/mariadb-database-create/0.log" Feb 17 14:35:16 crc kubenswrapper[4836]: I0217 14:35:16.434560 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b5121f0d-e93f-44c6-96b5-4ed7b6ec960e/glance-log/0.log" Feb 17 14:35:16 crc kubenswrapper[4836]: I0217 14:35:16.527564 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b5121f0d-e93f-44c6-96b5-4ed7b6ec960e/glance-httpd/0.log" Feb 17 14:35:16 crc kubenswrapper[4836]: I0217 14:35:16.777825 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_172fadf8-99d3-436a-b711-010e8ffe289b/glance-httpd/0.log" Feb 17 14:35:16 crc kubenswrapper[4836]: I0217 14:35:16.873396 4836 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_172fadf8-99d3-436a-b711-010e8ffe289b/glance-log/0.log" Feb 17 14:35:17 crc kubenswrapper[4836]: I0217 14:35:17.127989 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_79c00bb2-9487-433a-be90-07b6d885e4d0/cloudkitty-proc/0.log" Feb 17 14:35:17 crc kubenswrapper[4836]: I0217 14:35:17.244326 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-vmgps_10331926-261d-4e44-a8c2-89846903ca12/keystone-bootstrap/0.log" Feb 17 14:35:17 crc kubenswrapper[4836]: I0217 14:35:17.306224 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-78c4d587b5-cqhdl_f2f9acba-3f54-43b6-9461-31cba0cc954b/keystone-api/0.log" Feb 17 14:35:17 crc kubenswrapper[4836]: I0217 14:35:17.566191 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d8f3-account-create-update-kmlvm_2ae1659d-7892-4744-a570-4ba7c65e4caf/mariadb-account-create-update/0.log" Feb 17 14:35:17 crc kubenswrapper[4836]: I0217 14:35:17.572927 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-k7zc9_e562d506-21d2-4edd-90b8-97bd11bf068e/mariadb-database-create/0.log" Feb 17 14:35:17 crc kubenswrapper[4836]: I0217 14:35:17.635287 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-q25rr_6a1d4ef8-03d9-42d8-ae0b-9410767ed25f/keystone-db-sync/0.log" Feb 17 14:35:17 crc kubenswrapper[4836]: I0217 14:35:17.935934 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8809e181-9f70-4810-97e8-6fc4c9e3561a/kube-state-metrics/0.log" Feb 17 14:35:18 crc kubenswrapper[4836]: I0217 14:35:18.829813 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-14cb-account-create-update-xw2dd_623225aa-2492-494e-be5b-92acef6f23cf/mariadb-account-create-update/0.log" Feb 17 14:35:18 crc 
kubenswrapper[4836]: I0217 14:35:18.870878 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fc4994bf7-cqhhj_88848d0f-5d90-4ca0-9a78-d08e73159601/neutron-api/0.log" Feb 17 14:35:18 crc kubenswrapper[4836]: I0217 14:35:18.939089 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fc4994bf7-cqhhj_88848d0f-5d90-4ca0-9a78-d08e73159601/neutron-httpd/0.log" Feb 17 14:35:19 crc kubenswrapper[4836]: I0217 14:35:19.143912 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-nwjd8_d4ce1c7a-57e8-491e-84ab-8aed8baea37b/mariadb-database-create/0.log" Feb 17 14:35:19 crc kubenswrapper[4836]: I0217 14:35:19.303685 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-sb6h7_81ddbaec-f370-44a3-802b-26980ea65d2f/neutron-db-sync/0.log" Feb 17 14:35:19 crc kubenswrapper[4836]: I0217 14:35:19.807250 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a8815111-fe36-4868-b092-2f88255f8f2b/nova-api-api/0.log" Feb 17 14:35:19 crc kubenswrapper[4836]: I0217 14:35:19.811320 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a8815111-fe36-4868-b092-2f88255f8f2b/nova-api-log/0.log" Feb 17 14:35:20 crc kubenswrapper[4836]: I0217 14:35:20.150578 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-a7c4-account-create-update-qj5lb_c7d61f8c-4804-49b6-937e-fbaf20aa3ed2/mariadb-account-create-update/0.log" Feb 17 14:35:20 crc kubenswrapper[4836]: I0217 14:35:20.262733 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-q8wrd_88b1aa3a-dc15-4ec1-ba76-8246e300422f/mariadb-database-create/0.log" Feb 17 14:35:20 crc kubenswrapper[4836]: I0217 14:35:20.482493 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-8fba-account-create-update-gqd5n_0b8171da-ad25-4388-9dab-2afc19993d97/mariadb-account-create-update/0.log" Feb 17 14:35:20 crc kubenswrapper[4836]: I0217 14:35:20.550280 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-lqvvn_3f9d6a93-3d3a-4c5c-85cf-329209cfe911/nova-manage/0.log" Feb 17 14:35:20 crc kubenswrapper[4836]: I0217 14:35:20.823502 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_00cffdcb-70af-415e-86a8-4f8eb7c0ba6f/nova-cell0-conductor-conductor/0.log" Feb 17 14:35:20 crc kubenswrapper[4836]: I0217 14:35:20.901029 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-896gw_5284ac65-3629-4b0f-94ce-114964fe6d15/nova-cell0-conductor-db-sync/0.log" Feb 17 14:35:21 crc kubenswrapper[4836]: I0217 14:35:21.145065 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-db-create-npl52_db342a3d-55f5-4b0c-b96f-327014b6fb82/mariadb-database-create/0.log" Feb 17 14:35:21 crc kubenswrapper[4836]: I0217 14:35:21.653172 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-28f5-account-create-update-74tvm_4dc00367-2940-413d-872a-74d4fa37fc1f/mariadb-account-create-update/0.log" Feb 17 14:35:21 crc kubenswrapper[4836]: I0217 14:35:21.910401 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-h4mlr_079f20c9-f742-4c4b-a8c0-a2a09573bf62/nova-manage/0.log" Feb 17 14:35:22 crc kubenswrapper[4836]: I0217 14:35:22.175839 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-bz94v_790a788c-3cfe-49c8-b1ff-a83bcedf17e0/nova-cell1-conductor-db-sync/0.log" Feb 17 14:35:22 crc kubenswrapper[4836]: I0217 14:35:22.190846 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_ed905f2c-85b9-4684-a376-674caf693eca/nova-cell1-conductor-conductor/0.log" Feb 17 14:35:22 crc kubenswrapper[4836]: I0217 14:35:22.465534 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-db-create-5h5m9_0312359b-98a6-49c7-83f1-fb44c679e8aa/mariadb-database-create/0.log" Feb 17 14:35:22 crc kubenswrapper[4836]: I0217 14:35:22.569043 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:35:22 crc kubenswrapper[4836]: E0217 14:35:22.569857 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:35:22 crc kubenswrapper[4836]: I0217 14:35:22.661242 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6d9c8dd5-2ccb-4656-a059-352c03aa923d/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 14:35:22 crc kubenswrapper[4836]: I0217 14:35:22.867851 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c56150e0-07ff-4a45-9231-26fa261942c4/nova-metadata-metadata/0.log" Feb 17 14:35:22 crc kubenswrapper[4836]: I0217 14:35:22.921108 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c56150e0-07ff-4a45-9231-26fa261942c4/nova-metadata-log/0.log" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.123991 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6bfcfdb5-3886-47e2-8e71-33c95dc14e73/nova-scheduler-scheduler/0.log" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.341257 4836 
scope.go:117] "RemoveContainer" containerID="1fc9116efed5aa1cde1e1851a8feece763300523cbdc4d6253a5c08f4f4f9f36" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.473521 4836 scope.go:117] "RemoveContainer" containerID="9ee60ada822c522c9249d0e3c31f511d939804abdb610bce124e951b7000a09d" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.546807 4836 scope.go:117] "RemoveContainer" containerID="b7a5e210ee7a505ae087f3c56329942b71db962383e4ae1693812dd8340169c8" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.585346 4836 scope.go:117] "RemoveContainer" containerID="407f5678203e5e174c01300835b55b61252a1ab248014426970911ab531d756b" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.599157 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a6016745-1634-4eb6-afee-b98ce9ab8f56/mysql-bootstrap/0.log" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.634362 4836 scope.go:117] "RemoveContainer" containerID="d2098b2a7c4dcbee4fa27ea9bfa1c19e32c5f83e96aa663b877abb8284852c74" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.853263 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a6016745-1634-4eb6-afee-b98ce9ab8f56/mysql-bootstrap/0.log" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.911722 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a6016745-1634-4eb6-afee-b98ce9ab8f56/galera/0.log" Feb 17 14:35:23 crc kubenswrapper[4836]: I0217 14:35:23.977110 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2fd891e0-6f97-4fa3-8281-aa97232d6c6d/mysql-bootstrap/0.log" Feb 17 14:35:24 crc kubenswrapper[4836]: I0217 14:35:24.320583 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2fd891e0-6f97-4fa3-8281-aa97232d6c6d/mysql-bootstrap/0.log" Feb 17 14:35:24 crc kubenswrapper[4836]: I0217 14:35:24.377157 4836 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4fe674a8-c32b-412e-8d20-2a6e7e18bb10/openstackclient/0.log" Feb 17 14:35:24 crc kubenswrapper[4836]: I0217 14:35:24.411370 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2fd891e0-6f97-4fa3-8281-aa97232d6c6d/galera/0.log" Feb 17 14:35:25 crc kubenswrapper[4836]: I0217 14:35:25.492410 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ghk5k_5949d44f-ef6d-417e-9035-9b235cd59863/ovn-controller/0.log" Feb 17 14:35:25 crc kubenswrapper[4836]: I0217 14:35:25.512048 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6s7lx_bf32834e-7ae4-4e3b-b532-dd87f6a9223e/openstack-network-exporter/0.log" Feb 17 14:35:25 crc kubenswrapper[4836]: I0217 14:35:25.724696 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j4jj9_cefe420d-f25c-4681-9ae8-b61f0a354282/ovsdb-server-init/0.log" Feb 17 14:35:26 crc kubenswrapper[4836]: I0217 14:35:26.103166 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j4jj9_cefe420d-f25c-4681-9ae8-b61f0a354282/ovsdb-server/0.log" Feb 17 14:35:26 crc kubenswrapper[4836]: I0217 14:35:26.128357 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j4jj9_cefe420d-f25c-4681-9ae8-b61f0a354282/ovsdb-server-init/0.log" Feb 17 14:35:26 crc kubenswrapper[4836]: I0217 14:35:26.147192 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j4jj9_cefe420d-f25c-4681-9ae8-b61f0a354282/ovs-vswitchd/0.log" Feb 17 14:35:26 crc kubenswrapper[4836]: I0217 14:35:26.712682 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0f031114-b776-4180-ab6e-eb5868f34d3e/openstack-network-exporter/0.log" Feb 17 14:35:26 crc kubenswrapper[4836]: I0217 14:35:26.821377 4836 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-northd-0_0f031114-b776-4180-ab6e-eb5868f34d3e/ovn-northd/0.log" Feb 17 14:35:26 crc kubenswrapper[4836]: I0217 14:35:26.869050 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55bc1962-7790-448a-838c-cb13a870ea23/openstack-network-exporter/0.log" Feb 17 14:35:27 crc kubenswrapper[4836]: I0217 14:35:27.029837 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55bc1962-7790-448a-838c-cb13a870ea23/ovsdbserver-nb/0.log" Feb 17 14:35:27 crc kubenswrapper[4836]: I0217 14:35:27.226435 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_348d02a8-d1b2-4bd3-9f4c-9153e24a5f19/openstack-network-exporter/0.log" Feb 17 14:35:27 crc kubenswrapper[4836]: I0217 14:35:27.243431 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_348d02a8-d1b2-4bd3-9f4c-9153e24a5f19/ovsdbserver-sb/0.log" Feb 17 14:35:27 crc kubenswrapper[4836]: I0217 14:35:27.497093 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-83de-account-create-update-fh75b_54905e17-d443-4465-8f70-7be04a89086f/mariadb-account-create-update/0.log" Feb 17 14:35:27 crc kubenswrapper[4836]: I0217 14:35:27.609862 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc958ddf6-kh2rq_42c3b1e3-728a-4bd8-9669-bfe1656b6de2/placement-api/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.104949 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc958ddf6-kh2rq_42c3b1e3-728a-4bd8-9669-bfe1656b6de2/placement-log/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.382673 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-hx7tv_add50d48-0a1c-4d2f-bcc3-ae9355e95c3b/mariadb-database-create/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.407034 4836 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-db-sync-pdhxs_1fe4b42c-afbf-41e1-8035-5fffb156eadc/placement-db-sync/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.668062 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6fec8667-7189-4e29-8362-37dd935d2db7/init-config-reloader/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.879926 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6fec8667-7189-4e29-8362-37dd935d2db7/config-reloader/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.889414 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6fec8667-7189-4e29-8362-37dd935d2db7/init-config-reloader/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.915929 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6fec8667-7189-4e29-8362-37dd935d2db7/prometheus/0.log" Feb 17 14:35:28 crc kubenswrapper[4836]: I0217 14:35:28.932871 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6fec8667-7189-4e29-8362-37dd935d2db7/thanos-sidecar/0.log" Feb 17 14:35:29 crc kubenswrapper[4836]: I0217 14:35:29.166003 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6f866bb7-5209-4275-8884-df6f074b3f7c/setup-container/0.log" Feb 17 14:35:29 crc kubenswrapper[4836]: I0217 14:35:29.692607 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6f866bb7-5209-4275-8884-df6f074b3f7c/setup-container/0.log" Feb 17 14:35:29 crc kubenswrapper[4836]: I0217 14:35:29.825731 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6f866bb7-5209-4275-8884-df6f074b3f7c/rabbitmq/0.log" Feb 17 14:35:29 crc kubenswrapper[4836]: I0217 14:35:29.831782 4836 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec9408e6-0474-4f84-842e-b1c20f42a7b8/setup-container/0.log" Feb 17 14:35:30 crc kubenswrapper[4836]: I0217 14:35:30.180226 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec9408e6-0474-4f84-842e-b1c20f42a7b8/setup-container/0.log" Feb 17 14:35:30 crc kubenswrapper[4836]: I0217 14:35:30.194893 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_root-account-create-update-h9gmq_caa6524b-2b3f-47c3-b55f-1435685df59d/mariadb-account-create-update/0.log" Feb 17 14:35:30 crc kubenswrapper[4836]: I0217 14:35:30.232579 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec9408e6-0474-4f84-842e-b1c20f42a7b8/rabbitmq/0.log" Feb 17 14:35:30 crc kubenswrapper[4836]: I0217 14:35:30.499414 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d87f46c5f-vfn9f_a17ffb1e-09d2-4524-8c33-e50e15b9031d/proxy-httpd/0.log" Feb 17 14:35:30 crc kubenswrapper[4836]: I0217 14:35:30.584381 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d87f46c5f-vfn9f_a17ffb1e-09d2-4524-8c33-e50e15b9031d/proxy-server/0.log" Feb 17 14:35:30 crc kubenswrapper[4836]: I0217 14:35:30.799366 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dbzmx_cb33695b-c451-44b2-8a2a-fe534a4040e3/swift-ring-rebalance/0.log" Feb 17 14:35:30 crc kubenswrapper[4836]: I0217 14:35:30.931009 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/account-auditor/0.log" Feb 17 14:35:31 crc kubenswrapper[4836]: I0217 14:35:31.070930 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/account-reaper/0.log" Feb 17 14:35:31 crc kubenswrapper[4836]: I0217 14:35:31.775285 4836 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/account-replicator/0.log" Feb 17 14:35:31 crc kubenswrapper[4836]: I0217 14:35:31.821871 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/account-server/0.log" Feb 17 14:35:31 crc kubenswrapper[4836]: I0217 14:35:31.836225 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/container-auditor/0.log" Feb 17 14:35:31 crc kubenswrapper[4836]: I0217 14:35:31.966623 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ce3babe4-6d77-45ce-b9cc-626678d3ec64/memcached/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.023197 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/container-replicator/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.052067 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/container-server/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.151484 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/container-updater/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.201324 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/object-auditor/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.315436 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/object-expirer/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.393036 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/object-server/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.430871 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/object-updater/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.433011 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/object-replicator/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.484338 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/rsync/0.log" Feb 17 14:35:32 crc kubenswrapper[4836]: I0217 14:35:32.820693 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e482046c-502a-4f41-b013-7b3ef1c71ee1/swift-recon-cron/0.log" Feb 17 14:35:35 crc kubenswrapper[4836]: I0217 14:35:35.097485 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:35:35 crc kubenswrapper[4836]: E0217 14:35:35.098231 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:35:47 crc kubenswrapper[4836]: I0217 14:35:47.568991 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:35:47 crc kubenswrapper[4836]: E0217 14:35:47.570381 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:36:02 crc kubenswrapper[4836]: I0217 14:36:02.568888 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:36:02 crc kubenswrapper[4836]: E0217 14:36:02.569686 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:36:11 crc kubenswrapper[4836]: I0217 14:36:11.517612 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm_dc1ca64e-8914-44ae-8d9e-d7c63ba6e166/util/0.log" Feb 17 14:36:11 crc kubenswrapper[4836]: I0217 14:36:11.737992 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm_dc1ca64e-8914-44ae-8d9e-d7c63ba6e166/util/0.log" Feb 17 14:36:11 crc kubenswrapper[4836]: I0217 14:36:11.754161 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm_dc1ca64e-8914-44ae-8d9e-d7c63ba6e166/pull/0.log" Feb 17 14:36:11 crc kubenswrapper[4836]: I0217 14:36:11.840371 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm_dc1ca64e-8914-44ae-8d9e-d7c63ba6e166/pull/0.log" Feb 17 14:36:12 crc kubenswrapper[4836]: I0217 14:36:12.052839 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm_dc1ca64e-8914-44ae-8d9e-d7c63ba6e166/extract/0.log" Feb 17 14:36:12 crc kubenswrapper[4836]: I0217 14:36:12.058577 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm_dc1ca64e-8914-44ae-8d9e-d7c63ba6e166/pull/0.log" Feb 17 14:36:12 crc kubenswrapper[4836]: I0217 14:36:12.100473 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_609d143c5a73782b709463f2a5b0d811d0c25a93f651a8c9a58ebcae612cvgm_dc1ca64e-8914-44ae-8d9e-d7c63ba6e166/util/0.log" Feb 17 14:36:12 crc kubenswrapper[4836]: I0217 14:36:12.969061 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-8wdwr_0962ca43-43c4-4884-bd8e-889835f83632/manager/0.log" Feb 17 14:36:13 crc kubenswrapper[4836]: I0217 14:36:13.295447 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-zxb25_ce77a6a5-95bb-4758-8a38-cdc354fd9d6c/manager/0.log" Feb 17 14:36:13 crc kubenswrapper[4836]: I0217 14:36:13.466954 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-7vwdd_c3d9def3-7f53-4acc-9c46-d37ddf41e3b7/manager/0.log" Feb 17 14:36:13 crc kubenswrapper[4836]: I0217 14:36:13.774226 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-bv7s8_f2e6ac9f-ee72-4a28-b298-9b2f918d0c95/manager/0.log" Feb 17 14:36:14 crc kubenswrapper[4836]: I0217 
14:36:14.420371 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-b6cfm_12cff299-e5ea-40a9-8a69-528c478cd0a0/manager/0.log" Feb 17 14:36:14 crc kubenswrapper[4836]: I0217 14:36:14.643519 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-f4fvp_a1ae24b8-83c8-416d-9d39-24d84eb6cd83/manager/0.log" Feb 17 14:36:14 crc kubenswrapper[4836]: I0217 14:36:14.771485 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-k9p46_e805966b-ea22-4c2a-a6c4-3622300fcb2f/manager/0.log" Feb 17 14:36:14 crc kubenswrapper[4836]: I0217 14:36:14.976762 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-qnb5b_18a63480-edc2-44ed-bd43-b7750f7f8f33/manager/0.log" Feb 17 14:36:15 crc kubenswrapper[4836]: I0217 14:36:15.086920 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-6lzts_9ccd7ed5-2772-4482-af31-2578e98011fd/manager/0.log" Feb 17 14:36:15 crc kubenswrapper[4836]: I0217 14:36:15.693855 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-zkzrs_7b9749c7-038f-4814-9357-623346c9172c/manager/0.log" Feb 17 14:36:15 crc kubenswrapper[4836]: I0217 14:36:15.804932 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-6c4rn_3d12b131-73a0-477e-ab9e-579309b0f5b1/manager/0.log" Feb 17 14:36:16 crc kubenswrapper[4836]: I0217 14:36:16.070575 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-5hz7c_52a90e1a-0e2d-4488-8a1a-34de15bfa3a5/manager/0.log" Feb 17 14:36:16 crc 
kubenswrapper[4836]: I0217 14:36:16.381424 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ct7xht_4affaaf4-1113-4635-b30f-da26e04f6662/manager/0.log" Feb 17 14:36:17 crc kubenswrapper[4836]: I0217 14:36:17.179057 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7464dc569f-6nqxk_4afa09e7-5273-4170-8c40-6c3ed66e6b8e/operator/0.log" Feb 17 14:36:17 crc kubenswrapper[4836]: I0217 14:36:17.428861 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pz5pz_f0982db9-e1ef-4fc9-b7d4-e52ac91e6676/registry-server/0.log" Feb 17 14:36:17 crc kubenswrapper[4836]: I0217 14:36:17.568181 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:36:17 crc kubenswrapper[4836]: E0217 14:36:17.568550 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:36:17 crc kubenswrapper[4836]: I0217 14:36:17.790037 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-mq76b_f6ba6343-872d-4e36-accf-959bb437f82d/manager/0.log" Feb 17 14:36:18 crc kubenswrapper[4836]: I0217 14:36:18.565220 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-jnxzt_cf7c4631-b19a-4160-8581-15f72869a60b/manager/0.log" Feb 17 14:36:18 crc kubenswrapper[4836]: I0217 14:36:18.614852 4836 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-llzlm_1bb12b86-1f25-4dd9-a44d-449a6deee701/manager/0.log" Feb 17 14:36:18 crc kubenswrapper[4836]: I0217 14:36:18.911775 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w4dds_d423f7ba-2751-4d99-8102-3bc52b302161/operator/0.log" Feb 17 14:36:19 crc kubenswrapper[4836]: I0217 14:36:19.039747 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-7ktgs_d0c3c41c-ac60-40f0-bdfb-8fe641c9426a/manager/0.log" Feb 17 14:36:19 crc kubenswrapper[4836]: I0217 14:36:19.193966 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-667f54696f-kskgn_ca022a36-1c0e-4d3b-a6cf-87f4a78cfd48/manager/0.log" Feb 17 14:36:19 crc kubenswrapper[4836]: I0217 14:36:19.512608 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-ztvz2_d4aa765a-0f56-4f05-b02f-f041841bc97d/manager/0.log" Feb 17 14:36:19 crc kubenswrapper[4836]: I0217 14:36:19.661789 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-lmtng_1f238b1a-4c0c-45de-bb7a-12946f426b89/manager/0.log" Feb 17 14:36:19 crc kubenswrapper[4836]: I0217 14:36:19.774248 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d6964fcdb-rbq62_a3c22d9b-6ba0-4dd2-861d-8685c18e9330/manager/0.log" Feb 17 14:36:22 crc kubenswrapper[4836]: I0217 14:36:22.177055 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-54696_a7c6acc7-4243-4c0d-a723-e83dc2e054df/manager/0.log" Feb 17 14:36:23 crc kubenswrapper[4836]: I0217 14:36:23.880468 4836 
scope.go:117] "RemoveContainer" containerID="f0419b2e3c8ef9c0f54a84e7512a9cde99f00cb5aa7b44a637e787be45f07ccd" Feb 17 14:36:23 crc kubenswrapper[4836]: I0217 14:36:23.914313 4836 scope.go:117] "RemoveContainer" containerID="fcda893980936a4e72f451c12f1e7a2007edb9f1324581557ec99a4e77ee81f9" Feb 17 14:36:23 crc kubenswrapper[4836]: I0217 14:36:23.950286 4836 scope.go:117] "RemoveContainer" containerID="d3297c8494404e0f55bc6c3d7032d9a3295e84dc803655d2e2df3e6ab7a747be" Feb 17 14:36:23 crc kubenswrapper[4836]: I0217 14:36:23.982089 4836 scope.go:117] "RemoveContainer" containerID="4337aced693eb74520c39cdaad50c2d06e723483b872e61eb2f707cc9550085e" Feb 17 14:36:28 crc kubenswrapper[4836]: I0217 14:36:28.568955 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:36:28 crc kubenswrapper[4836]: E0217 14:36:28.569993 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:36:39 crc kubenswrapper[4836]: I0217 14:36:39.569429 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:36:39 crc kubenswrapper[4836]: E0217 14:36:39.570315 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 
14:36:44 crc kubenswrapper[4836]: I0217 14:36:44.059274 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pn587"] Feb 17 14:36:44 crc kubenswrapper[4836]: I0217 14:36:44.074039 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pn587"] Feb 17 14:36:44 crc kubenswrapper[4836]: I0217 14:36:44.583654 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b" path="/var/lib/kubelet/pods/77cd04d8-7d93-4b49-b91b-8cc5fdf8f53b/volumes" Feb 17 14:36:45 crc kubenswrapper[4836]: I0217 14:36:45.058883 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hx7tv"] Feb 17 14:36:45 crc kubenswrapper[4836]: I0217 14:36:45.071655 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hx7tv"] Feb 17 14:36:46 crc kubenswrapper[4836]: I0217 14:36:46.583960 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add50d48-0a1c-4d2f-bcc3-ae9355e95c3b" path="/var/lib/kubelet/pods/add50d48-0a1c-4d2f-bcc3-ae9355e95c3b/volumes" Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.043021 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d162-account-create-update-khb5j"] Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.054869 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-83de-account-create-update-fh75b"] Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.067890 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d8f3-account-create-update-kmlvm"] Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.083851 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d162-account-create-update-khb5j"] Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.095150 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-d8f3-account-create-update-kmlvm"] Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.106096 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-k7zc9"] Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.117332 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-83de-account-create-update-fh75b"] Feb 17 14:36:47 crc kubenswrapper[4836]: I0217 14:36:47.127841 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-k7zc9"] Feb 17 14:36:48 crc kubenswrapper[4836]: I0217 14:36:48.585398 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5" path="/var/lib/kubelet/pods/1a5ee61c-8aa3-4bb4-a7da-2d61ca1561f5/volumes" Feb 17 14:36:48 crc kubenswrapper[4836]: I0217 14:36:48.586501 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae1659d-7892-4744-a570-4ba7c65e4caf" path="/var/lib/kubelet/pods/2ae1659d-7892-4744-a570-4ba7c65e4caf/volumes" Feb 17 14:36:48 crc kubenswrapper[4836]: I0217 14:36:48.587409 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54905e17-d443-4465-8f70-7be04a89086f" path="/var/lib/kubelet/pods/54905e17-d443-4465-8f70-7be04a89086f/volumes" Feb 17 14:36:48 crc kubenswrapper[4836]: I0217 14:36:48.588147 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e562d506-21d2-4edd-90b8-97bd11bf068e" path="/var/lib/kubelet/pods/e562d506-21d2-4edd-90b8-97bd11bf068e/volumes" Feb 17 14:36:48 crc kubenswrapper[4836]: I0217 14:36:48.807664 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jhzxl_cea58b47-da5e-4dc7-be23-19d8408318d7/control-plane-machine-set-operator/0.log" Feb 17 14:36:49 crc kubenswrapper[4836]: I0217 14:36:49.437210 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jjmwc_1ecc7c98-e9a3-4850-a741-7e0bcf670e27/machine-api-operator/0.log" Feb 17 14:36:49 crc kubenswrapper[4836]: I0217 14:36:49.444465 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jjmwc_1ecc7c98-e9a3-4850-a741-7e0bcf670e27/kube-rbac-proxy/0.log" Feb 17 14:36:52 crc kubenswrapper[4836]: I0217 14:36:52.575104 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:36:52 crc kubenswrapper[4836]: E0217 14:36:52.576090 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:37:00 crc kubenswrapper[4836]: I0217 14:37:00.062271 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h9gmq"] Feb 17 14:37:00 crc kubenswrapper[4836]: I0217 14:37:00.089775 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h9gmq"] Feb 17 14:37:00 crc kubenswrapper[4836]: I0217 14:37:00.581218 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa6524b-2b3f-47c3-b55f-1435685df59d" path="/var/lib/kubelet/pods/caa6524b-2b3f-47c3-b55f-1435685df59d/volumes" Feb 17 14:37:04 crc kubenswrapper[4836]: I0217 14:37:04.586543 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:37:04 crc kubenswrapper[4836]: E0217 14:37:04.588157 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:37:06 crc kubenswrapper[4836]: I0217 14:37:06.078181 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vtfx4_63f75031-4e24-42f7-80cc-2f3fb289dac0/cert-manager-controller/0.log" Feb 17 14:37:06 crc kubenswrapper[4836]: I0217 14:37:06.230619 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dmddv_918985c6-76a8-4bb2-8868-278b633133a9/cert-manager-cainjector/0.log" Feb 17 14:37:06 crc kubenswrapper[4836]: I0217 14:37:06.331734 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-zhbzj_662067b4-39c2-4ab7-adb4-ba8a6330b0b9/cert-manager-webhook/0.log" Feb 17 14:37:16 crc kubenswrapper[4836]: I0217 14:37:16.568269 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:37:16 crc kubenswrapper[4836]: E0217 14:37:16.569452 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:37:21 crc kubenswrapper[4836]: I0217 14:37:21.601523 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-q985f_8fc6d41c-a8a1-4fe3-ade2-b79761920b17/nmstate-console-plugin/0.log" Feb 17 
14:37:21 crc kubenswrapper[4836]: I0217 14:37:21.789245 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-w8wbg_9ff842c9-08b8-4363-b82a-5f7e2461ec2a/nmstate-handler/0.log" Feb 17 14:37:21 crc kubenswrapper[4836]: I0217 14:37:21.824099 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-877xf_0d0615b5-ef3b-4932-957c-a4b44f35c1a9/kube-rbac-proxy/0.log" Feb 17 14:37:21 crc kubenswrapper[4836]: I0217 14:37:21.926025 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-877xf_0d0615b5-ef3b-4932-957c-a4b44f35c1a9/nmstate-metrics/0.log" Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.042528 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-14cb-account-create-update-xw2dd"] Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.048061 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-9w75g_c190e38d-4893-49c9-a633-e6b912030d37/nmstate-operator/0.log" Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.060029 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nwjd8"] Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.071626 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-14cb-account-create-update-xw2dd"] Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.086316 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nwjd8"] Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.251534 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-52vj8_6d6a6ca4-12c5-4bc1-b67e-5a48d1fe86f8/nmstate-webhook/0.log" Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.584821 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="623225aa-2492-494e-be5b-92acef6f23cf" path="/var/lib/kubelet/pods/623225aa-2492-494e-be5b-92acef6f23cf/volumes" Feb 17 14:37:22 crc kubenswrapper[4836]: I0217 14:37:22.585833 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ce1c7a-57e8-491e-84ab-8aed8baea37b" path="/var/lib/kubelet/pods/d4ce1c7a-57e8-491e-84ab-8aed8baea37b/volumes" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.054785 4836 scope.go:117] "RemoveContainer" containerID="b3fd8198bda32089f8d16c7005023bc9355442a69582a28217b4faa19a58edfd" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.093049 4836 scope.go:117] "RemoveContainer" containerID="bda2c6a640050c54150d82f44c6e78a2f7107b79ee0b4f6fd03e4d8c6e1019d3" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.150822 4836 scope.go:117] "RemoveContainer" containerID="8f88022ab4daa99006c48416f95fa6fcf0ec231af3f8553f0fffe8cc8f1971ee" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.197129 4836 scope.go:117] "RemoveContainer" containerID="55c6c8d1d911f68476c5d07d35dec7d57e500cdc1c29d64681255555160897dd" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.285733 4836 scope.go:117] "RemoveContainer" containerID="ca8e0602e1b36f3c2d9bfabc7020988df18e6945d19646bd583313467d47a539" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.321588 4836 scope.go:117] "RemoveContainer" containerID="c0e6439979838c98e66157164ef8073f70f7245c52bc8c72b4753a2777fab786" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.374434 4836 scope.go:117] "RemoveContainer" containerID="7e6f04d96e5a077df5020259f367870723b0f91e790c0b81e936bf2cbc3790f9" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.399478 4836 scope.go:117] "RemoveContainer" containerID="0179fb4c7564ecef52fa63a2f91fe687b3340cb3f7aaa46ff46f4ec68e5ee26d" Feb 17 14:37:24 crc kubenswrapper[4836]: I0217 14:37:24.431052 4836 scope.go:117] "RemoveContainer" containerID="bf410eadcd21b6c409b08a23916bc0ac4d5ba43505387a89c251ab098b87e562" Feb 17 14:37:29 
crc kubenswrapper[4836]: I0217 14:37:29.074806 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-2ea0-account-create-update-p7p99"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.085832 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-jjrp2"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.097011 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0d11-account-create-update-jf72z"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.107274 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-69hk6"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.117060 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-652c-account-create-update-lswdv"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.128578 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w5qdk"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.141365 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-69hk6"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.150965 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-2ea0-account-create-update-p7p99"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.160246 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-652c-account-create-update-lswdv"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.171345 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w5qdk"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.183245 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0d11-account-create-update-jf72z"] Feb 17 14:37:29 crc kubenswrapper[4836]: I0217 14:37:29.194874 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cloudkitty-db-create-jjrp2"] Feb 17 14:37:30 crc kubenswrapper[4836]: I0217 14:37:30.569503 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:37:30 crc kubenswrapper[4836]: E0217 14:37:30.570187 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:37:30 crc kubenswrapper[4836]: I0217 14:37:30.582267 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee1a0f2-86df-4f97-957a-22bbd7da4505" path="/var/lib/kubelet/pods/2ee1a0f2-86df-4f97-957a-22bbd7da4505/volumes" Feb 17 14:37:30 crc kubenswrapper[4836]: I0217 14:37:30.583394 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4edeb89f-0bd9-466e-a9f9-2d45575d2c72" path="/var/lib/kubelet/pods/4edeb89f-0bd9-466e-a9f9-2d45575d2c72/volumes" Feb 17 14:37:30 crc kubenswrapper[4836]: I0217 14:37:30.584174 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767841a7-db94-430a-b408-10e5bd0350e5" path="/var/lib/kubelet/pods/767841a7-db94-430a-b408-10e5bd0350e5/volumes" Feb 17 14:37:30 crc kubenswrapper[4836]: I0217 14:37:30.584864 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fe36f3-d6b6-44e0-b85b-6def754fd08e" path="/var/lib/kubelet/pods/a1fe36f3-d6b6-44e0-b85b-6def754fd08e/volumes" Feb 17 14:37:30 crc kubenswrapper[4836]: I0217 14:37:30.586230 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb354e85-311d-40bb-ae4a-5c535d4d89b9" path="/var/lib/kubelet/pods/eb354e85-311d-40bb-ae4a-5c535d4d89b9/volumes" Feb 17 14:37:30 crc 
kubenswrapper[4836]: I0217 14:37:30.587169 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ee15e8-6695-454f-83ad-d54176458497" path="/var/lib/kubelet/pods/f9ee15e8-6695-454f-83ad-d54176458497/volumes" Feb 17 14:37:36 crc kubenswrapper[4836]: I0217 14:37:36.040072 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-q25rr"] Feb 17 14:37:36 crc kubenswrapper[4836]: I0217 14:37:36.054874 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-q25rr"] Feb 17 14:37:36 crc kubenswrapper[4836]: I0217 14:37:36.582199 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1d4ef8-03d9-42d8-ae0b-9410767ed25f" path="/var/lib/kubelet/pods/6a1d4ef8-03d9-42d8-ae0b-9410767ed25f/volumes" Feb 17 14:37:36 crc kubenswrapper[4836]: I0217 14:37:36.728583 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dfd4b8c4b-kclf7_297a6b35-d11d-4c2b-858c-79cb4c3c1b2c/kube-rbac-proxy/0.log" Feb 17 14:37:36 crc kubenswrapper[4836]: I0217 14:37:36.774243 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dfd4b8c4b-kclf7_297a6b35-d11d-4c2b-858c-79cb4c3c1b2c/manager/0.log" Feb 17 14:37:41 crc kubenswrapper[4836]: I0217 14:37:41.567949 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:37:41 crc kubenswrapper[4836]: E0217 14:37:41.568739 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 
14:37:51 crc kubenswrapper[4836]: I0217 14:37:51.472661 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xm2rk_755bc851-3fff-45db-bbcf-164a27afcf85/prometheus-operator/0.log" Feb 17 14:37:51 crc kubenswrapper[4836]: I0217 14:37:51.585498 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_5a9fdae1-f115-4e94-9b72-026862e02026/prometheus-operator-admission-webhook/0.log" Feb 17 14:37:51 crc kubenswrapper[4836]: I0217 14:37:51.702173 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_ce0a3fd2-d84a-417c-bd46-c0dba979376e/prometheus-operator-admission-webhook/0.log" Feb 17 14:37:51 crc kubenswrapper[4836]: I0217 14:37:51.834036 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-f94f2_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578/operator/0.log" Feb 17 14:37:51 crc kubenswrapper[4836]: I0217 14:37:51.979915 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vqhkf_c4b6d996-7a86-4512-825f-6e6d34148862/perses-operator/0.log" Feb 17 14:37:53 crc kubenswrapper[4836]: I0217 14:37:53.568774 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:37:53 crc kubenswrapper[4836]: E0217 14:37:53.569389 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:37:55 crc 
kubenswrapper[4836]: I0217 14:37:55.039268 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-z8g7x"] Feb 17 14:37:55 crc kubenswrapper[4836]: I0217 14:37:55.064048 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-z8g7x"] Feb 17 14:37:56 crc kubenswrapper[4836]: I0217 14:37:56.580037 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3a6cf1-bca0-45b2-9f7c-6d483452d49d" path="/var/lib/kubelet/pods/df3a6cf1-bca0-45b2-9f7c-6d483452d49d/volumes" Feb 17 14:38:06 crc kubenswrapper[4836]: I0217 14:38:06.573702 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:38:06 crc kubenswrapper[4836]: E0217 14:38:06.574734 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:38:06 crc kubenswrapper[4836]: I0217 14:38:06.623269 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-szl4j_27eed55a-1a00-497e-9aa4-74f7007f336e/kube-rbac-proxy/0.log" Feb 17 14:38:06 crc kubenswrapper[4836]: I0217 14:38:06.646415 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-szl4j_27eed55a-1a00-497e-9aa4-74f7007f336e/controller/0.log" Feb 17 14:38:06 crc kubenswrapper[4836]: I0217 14:38:06.842726 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-mznjt_18ec2995-af0c-4c47-aa70-480f9323329e/frr-k8s-webhook-server/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.045005 4836 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-frr-files/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.242572 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-reloader/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.247455 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-frr-files/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.261660 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-reloader/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.269693 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-metrics/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.479477 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-reloader/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.503916 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-metrics/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.556587 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-metrics/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.565544 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-frr-files/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.804133 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-reloader/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.829663 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-metrics/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.830954 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/cp-frr-files/0.log" Feb 17 14:38:07 crc kubenswrapper[4836]: I0217 14:38:07.891367 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/controller/0.log" Feb 17 14:38:08 crc kubenswrapper[4836]: I0217 14:38:08.052814 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/frr-metrics/0.log" Feb 17 14:38:08 crc kubenswrapper[4836]: I0217 14:38:08.094220 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/kube-rbac-proxy/0.log" Feb 17 14:38:08 crc kubenswrapper[4836]: I0217 14:38:08.161029 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/kube-rbac-proxy-frr/0.log" Feb 17 14:38:08 crc kubenswrapper[4836]: I0217 14:38:08.306359 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/reloader/0.log" Feb 17 14:38:08 crc kubenswrapper[4836]: I0217 14:38:08.447570 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69b9cbf5df-6fkqt_ccb35f40-d0b8-4a1e-8c45-63dd6987b72c/manager/0.log" Feb 17 14:38:08 crc kubenswrapper[4836]: I0217 14:38:08.641091 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-856546fc87-n5vrx_16c736d5-389e-4d03-9657-1abcd4448953/webhook-server/0.log" Feb 17 14:38:08 crc kubenswrapper[4836]: I0217 14:38:08.889667 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pb5ff_2690ef6e-0489-43f3-b787-8b6c1295e283/kube-rbac-proxy/0.log" Feb 17 14:38:09 crc kubenswrapper[4836]: I0217 14:38:09.142601 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x257b_e019f338-ff73-4160-a283-a71e9e6119b3/frr/0.log" Feb 17 14:38:09 crc kubenswrapper[4836]: I0217 14:38:09.375632 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pb5ff_2690ef6e-0489-43f3-b787-8b6c1295e283/speaker/0.log" Feb 17 14:38:13 crc kubenswrapper[4836]: I0217 14:38:13.059214 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sb6h7"] Feb 17 14:38:13 crc kubenswrapper[4836]: I0217 14:38:13.067864 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sb6h7"] Feb 17 14:38:14 crc kubenswrapper[4836]: I0217 14:38:14.612009 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ddbaec-f370-44a3-802b-26980ea65d2f" path="/var/lib/kubelet/pods/81ddbaec-f370-44a3-802b-26980ea65d2f/volumes" Feb 17 14:38:18 crc kubenswrapper[4836]: I0217 14:38:18.568755 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:38:18 crc kubenswrapper[4836]: E0217 14:38:18.569720 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:38:22 crc kubenswrapper[4836]: I0217 14:38:22.626598 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf_3464477d-9902-4d40-9048-443132123fb3/util/0.log" Feb 17 14:38:22 crc kubenswrapper[4836]: I0217 14:38:22.790709 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf_3464477d-9902-4d40-9048-443132123fb3/util/0.log" Feb 17 14:38:22 crc kubenswrapper[4836]: I0217 14:38:22.798103 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf_3464477d-9902-4d40-9048-443132123fb3/pull/0.log" Feb 17 14:38:22 crc kubenswrapper[4836]: I0217 14:38:22.852083 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf_3464477d-9902-4d40-9048-443132123fb3/pull/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.129341 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf_3464477d-9902-4d40-9048-443132123fb3/util/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.130421 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf_3464477d-9902-4d40-9048-443132123fb3/pull/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.147516 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jtdwf_3464477d-9902-4d40-9048-443132123fb3/extract/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.321109 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd_f611c52f-90dc-454e-8c3c-ca9d6a915f58/util/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.521923 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd_f611c52f-90dc-454e-8c3c-ca9d6a915f58/util/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.528213 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd_f611c52f-90dc-454e-8c3c-ca9d6a915f58/pull/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.536626 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd_f611c52f-90dc-454e-8c3c-ca9d6a915f58/pull/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.730520 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd_f611c52f-90dc-454e-8c3c-ca9d6a915f58/util/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.759430 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd_f611c52f-90dc-454e-8c3c-ca9d6a915f58/pull/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.770893 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j84kd_f611c52f-90dc-454e-8c3c-ca9d6a915f58/extract/0.log" Feb 17 14:38:23 crc kubenswrapper[4836]: I0217 14:38:23.934089 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl_5939eb42-42be-4ecf-845a-c28b4669c02d/util/0.log" Feb 17 
14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.128411 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl_5939eb42-42be-4ecf-845a-c28b4669c02d/util/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.130487 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl_5939eb42-42be-4ecf-845a-c28b4669c02d/pull/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.171827 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl_5939eb42-42be-4ecf-845a-c28b4669c02d/pull/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.322550 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl_5939eb42-42be-4ecf-845a-c28b4669c02d/util/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.338151 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl_5939eb42-42be-4ecf-845a-c28b4669c02d/pull/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.341093 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cwqrl_5939eb42-42be-4ecf-845a-c28b4669c02d/extract/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.564951 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xxj4j_eaecd71b-3b00-427a-9654-9d04af5469b9/extract-utilities/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.646079 4836 scope.go:117] "RemoveContainer" containerID="35ecf820b0414db1c94b077c083568db5d4a957bb9d735db9d4e378b6ebbc861" Feb 17 14:38:24 
crc kubenswrapper[4836]: I0217 14:38:24.708333 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xxj4j_eaecd71b-3b00-427a-9654-9d04af5469b9/extract-utilities/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.732602 4836 scope.go:117] "RemoveContainer" containerID="3ae7c112e0518db5ada6508ad8c57217e914b3d3401ff927d4aa18b2e2dd9f79" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.772488 4836 scope.go:117] "RemoveContainer" containerID="0112cdba6fc4f4acf8102f48cb77deaeb49a0b5c8b49e3c6adcdb559d7e100b6" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.777641 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xxj4j_eaecd71b-3b00-427a-9654-9d04af5469b9/extract-content/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.779480 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xxj4j_eaecd71b-3b00-427a-9654-9d04af5469b9/extract-content/0.log" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.822065 4836 scope.go:117] "RemoveContainer" containerID="5e36e16a50074efc0038c12585afeefa45bc968423f053fecc01a7a460fc9fd3" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.895350 4836 scope.go:117] "RemoveContainer" containerID="e3b5cb6d26fdb2e586683ff31b8abe63df8d533a376c42dd280747ab5e165f5e" Feb 17 14:38:24 crc kubenswrapper[4836]: I0217 14:38:24.943228 4836 scope.go:117] "RemoveContainer" containerID="515b55d1439f54ad3649999fcf112b0e86238d037ec2170a1978295a22c02429" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.030427 4836 scope.go:117] "RemoveContainer" containerID="7f08e0024064e8fd1c473afb57d745eb10366b72696b8824621db71657c54472" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.084708 4836 scope.go:117] "RemoveContainer" containerID="86d009aabc2aafe94768037f28b03b96d85141a639669b82cdbd2fa653d9696d" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 
14:38:25.117411 4836 scope.go:117] "RemoveContainer" containerID="2953db160f228060c084b5fd479ec149c2b0acd6cacae4957fb68229d08ae1b9" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.134814 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xxj4j_eaecd71b-3b00-427a-9654-9d04af5469b9/extract-content/0.log" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.210720 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xxj4j_eaecd71b-3b00-427a-9654-9d04af5469b9/extract-utilities/0.log" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.403709 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zjxvt_212802dd-4c4f-444a-b443-bc3bbd1431bc/extract-utilities/0.log" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.527811 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xxj4j_eaecd71b-3b00-427a-9654-9d04af5469b9/registry-server/0.log" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.658895 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zjxvt_212802dd-4c4f-444a-b443-bc3bbd1431bc/extract-content/0.log" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.661672 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zjxvt_212802dd-4c4f-444a-b443-bc3bbd1431bc/extract-utilities/0.log" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.742549 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zjxvt_212802dd-4c4f-444a-b443-bc3bbd1431bc/extract-content/0.log" Feb 17 14:38:25 crc kubenswrapper[4836]: I0217 14:38:25.985862 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-zjxvt_212802dd-4c4f-444a-b443-bc3bbd1431bc/extract-utilities/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.143192 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zjxvt_212802dd-4c4f-444a-b443-bc3bbd1431bc/extract-content/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.385396 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz_96be2236-f07d-4944-8afa-b15a4ce0c4f0/util/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.675207 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zjxvt_212802dd-4c4f-444a-b443-bc3bbd1431bc/registry-server/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.687258 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz_96be2236-f07d-4944-8afa-b15a4ce0c4f0/pull/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.719569 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz_96be2236-f07d-4944-8afa-b15a4ce0c4f0/util/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.721372 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz_96be2236-f07d-4944-8afa-b15a4ce0c4f0/pull/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.962673 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz_96be2236-f07d-4944-8afa-b15a4ce0c4f0/pull/0.log" Feb 17 14:38:26 crc kubenswrapper[4836]: I0217 14:38:26.990903 4836 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz_96be2236-f07d-4944-8afa-b15a4ce0c4f0/util/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.005112 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rhsgl_bd68f8c7-fdcc-449d-9f92-2f7afcb4917b/marketplace-operator/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.005345 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawmbrz_96be2236-f07d-4944-8afa-b15a4ce0c4f0/extract/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.142968 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8gtc9_8fb3c078-0953-4561-a532-cc25ff32d845/extract-utilities/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.360970 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8gtc9_8fb3c078-0953-4561-a532-cc25ff32d845/extract-utilities/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.375238 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8gtc9_8fb3c078-0953-4561-a532-cc25ff32d845/extract-content/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.404419 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8gtc9_8fb3c078-0953-4561-a532-cc25ff32d845/extract-content/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.559217 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8gtc9_8fb3c078-0953-4561-a532-cc25ff32d845/extract-utilities/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.631498 4836 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8gtc9_8fb3c078-0953-4561-a532-cc25ff32d845/extract-content/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.631597 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r5vl4_5d52263a-9417-43b6-903c-79e41b1200a0/extract-utilities/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.714076 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8gtc9_8fb3c078-0953-4561-a532-cc25ff32d845/registry-server/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.864534 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r5vl4_5d52263a-9417-43b6-903c-79e41b1200a0/extract-utilities/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.886745 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r5vl4_5d52263a-9417-43b6-903c-79e41b1200a0/extract-content/0.log" Feb 17 14:38:27 crc kubenswrapper[4836]: I0217 14:38:27.942506 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r5vl4_5d52263a-9417-43b6-903c-79e41b1200a0/extract-content/0.log" Feb 17 14:38:28 crc kubenswrapper[4836]: I0217 14:38:28.090480 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r5vl4_5d52263a-9417-43b6-903c-79e41b1200a0/extract-content/0.log" Feb 17 14:38:28 crc kubenswrapper[4836]: I0217 14:38:28.118415 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r5vl4_5d52263a-9417-43b6-903c-79e41b1200a0/extract-utilities/0.log" Feb 17 14:38:28 crc kubenswrapper[4836]: I0217 14:38:28.275548 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r5vl4_5d52263a-9417-43b6-903c-79e41b1200a0/registry-server/0.log" Feb 17 
14:38:29 crc kubenswrapper[4836]: I0217 14:38:29.064375 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-pdhxs"] Feb 17 14:38:29 crc kubenswrapper[4836]: I0217 14:38:29.085928 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vmgps"] Feb 17 14:38:29 crc kubenswrapper[4836]: I0217 14:38:29.101456 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-pdhxs"] Feb 17 14:38:29 crc kubenswrapper[4836]: I0217 14:38:29.116904 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vmgps"] Feb 17 14:38:30 crc kubenswrapper[4836]: I0217 14:38:30.038585 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g9l4s"] Feb 17 14:38:30 crc kubenswrapper[4836]: I0217 14:38:30.053909 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g9l4s"] Feb 17 14:38:30 crc kubenswrapper[4836]: I0217 14:38:30.568558 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:38:30 crc kubenswrapper[4836]: E0217 14:38:30.569332 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:38:30 crc kubenswrapper[4836]: I0217 14:38:30.588189 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10331926-261d-4e44-a8c2-89846903ca12" path="/var/lib/kubelet/pods/10331926-261d-4e44-a8c2-89846903ca12/volumes" Feb 17 14:38:30 crc kubenswrapper[4836]: I0217 14:38:30.589189 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="18361bc2-5db1-4611-be18-38593e0b5d5d" path="/var/lib/kubelet/pods/18361bc2-5db1-4611-be18-38593e0b5d5d/volumes" Feb 17 14:38:30 crc kubenswrapper[4836]: I0217 14:38:30.590264 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe4b42c-afbf-41e1-8035-5fffb156eadc" path="/var/lib/kubelet/pods/1fe4b42c-afbf-41e1-8035-5fffb156eadc/volumes" Feb 17 14:38:40 crc kubenswrapper[4836]: I0217 14:38:40.039235 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qqwhc"] Feb 17 14:38:40 crc kubenswrapper[4836]: I0217 14:38:40.065892 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qqwhc"] Feb 17 14:38:40 crc kubenswrapper[4836]: I0217 14:38:40.579535 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8185c649-f1ad-4230-830d-07d002e5b358" path="/var/lib/kubelet/pods/8185c649-f1ad-4230-830d-07d002e5b358/volumes" Feb 17 14:38:41 crc kubenswrapper[4836]: I0217 14:38:41.490514 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b474d8486-lngwm_5a9fdae1-f115-4e94-9b72-026862e02026/prometheus-operator-admission-webhook/0.log" Feb 17 14:38:41 crc kubenswrapper[4836]: I0217 14:38:41.510149 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xm2rk_755bc851-3fff-45db-bbcf-164a27afcf85/prometheus-operator/0.log" Feb 17 14:38:41 crc kubenswrapper[4836]: I0217 14:38:41.544202 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b474d8486-m8jhr_ce0a3fd2-d84a-417c-bd46-c0dba979376e/prometheus-operator-admission-webhook/0.log" Feb 17 14:38:41 crc kubenswrapper[4836]: I0217 14:38:41.704329 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vqhkf_c4b6d996-7a86-4512-825f-6e6d34148862/perses-operator/0.log" 
Feb 17 14:38:41 crc kubenswrapper[4836]: I0217 14:38:41.723895 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-f94f2_d6acbcf2-dfc0-4a7b-b6bd-4b62c0b03578/operator/0.log" Feb 17 14:38:42 crc kubenswrapper[4836]: I0217 14:38:42.568824 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:38:42 crc kubenswrapper[4836]: E0217 14:38:42.569535 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:38:51 crc kubenswrapper[4836]: I0217 14:38:51.041517 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-pvljf"] Feb 17 14:38:51 crc kubenswrapper[4836]: I0217 14:38:51.058370 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-pvljf"] Feb 17 14:38:52 crc kubenswrapper[4836]: I0217 14:38:52.581816 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e016162-2025-44ad-989d-ce71d9f8f9bf" path="/var/lib/kubelet/pods/4e016162-2025-44ad-989d-ce71d9f8f9bf/volumes" Feb 17 14:38:55 crc kubenswrapper[4836]: I0217 14:38:55.433818 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dfd4b8c4b-kclf7_297a6b35-d11d-4c2b-858c-79cb4c3c1b2c/kube-rbac-proxy/0.log" Feb 17 14:38:55 crc kubenswrapper[4836]: I0217 14:38:55.481822 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dfd4b8c4b-kclf7_297a6b35-d11d-4c2b-858c-79cb4c3c1b2c/manager/0.log" 
Feb 17 14:38:57 crc kubenswrapper[4836]: I0217 14:38:57.568403 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:38:57 crc kubenswrapper[4836]: E0217 14:38:57.569320 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:39:08 crc kubenswrapper[4836]: I0217 14:39:08.088817 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-9z4jp"] Feb 17 14:39:08 crc kubenswrapper[4836]: I0217 14:39:08.101202 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-9z4jp"] Feb 17 14:39:08 crc kubenswrapper[4836]: I0217 14:39:08.580035 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f38b5f94-bc8b-4e64-abe6-8c39b920cb4b" path="/var/lib/kubelet/pods/f38b5f94-bc8b-4e64-abe6-8c39b920cb4b/volumes" Feb 17 14:39:09 crc kubenswrapper[4836]: I0217 14:39:09.569228 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:39:10 crc kubenswrapper[4836]: I0217 14:39:10.747642 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"6f19bb9a4d6443b07f247471c35e97a577b83e39d81d033aff596fac57089969"} Feb 17 14:39:25 crc kubenswrapper[4836]: I0217 14:39:25.385998 4836 scope.go:117] "RemoveContainer" containerID="ff24c89536ae06cf6a0fbffcb68050de3e8ed22356c912b4e7e87afbef99480d" Feb 17 14:39:25 crc kubenswrapper[4836]: I0217 
14:39:25.439165 4836 scope.go:117] "RemoveContainer" containerID="852265bc6ffb6ef9657692f454a84caf832b683e76f800e8dccb3317d95a69ea" Feb 17 14:39:25 crc kubenswrapper[4836]: I0217 14:39:25.490683 4836 scope.go:117] "RemoveContainer" containerID="fc7f81c47e20cce7a74c227545b963bd61d6dadbccf7dacfaa97a9b912354775" Feb 17 14:39:25 crc kubenswrapper[4836]: I0217 14:39:25.531115 4836 scope.go:117] "RemoveContainer" containerID="705f230fd2d44c1059294c17cc5410cef58dcabc1573c4e7f4f531d00aad46ec" Feb 17 14:39:25 crc kubenswrapper[4836]: I0217 14:39:25.598606 4836 scope.go:117] "RemoveContainer" containerID="0a4b8ba8b2087b1a38486d6f6172aee2da2f8fb8e22feee2e93bb22306b6558e" Feb 17 14:39:25 crc kubenswrapper[4836]: I0217 14:39:25.661804 4836 scope.go:117] "RemoveContainer" containerID="13ef4f24a42269dbbf22aa927159da757007caa607e5236e1441cff6b685fe12" Feb 17 14:39:45 crc kubenswrapper[4836]: I0217 14:39:45.055901 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-q8wrd"] Feb 17 14:39:45 crc kubenswrapper[4836]: I0217 14:39:45.072463 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-q8wrd"] Feb 17 14:39:46 crc kubenswrapper[4836]: I0217 14:39:46.039963 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-npl52"] Feb 17 14:39:46 crc kubenswrapper[4836]: I0217 14:39:46.052455 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-npl52"] Feb 17 14:39:46 crc kubenswrapper[4836]: I0217 14:39:46.580724 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b1aa3a-dc15-4ec1-ba76-8246e300422f" path="/var/lib/kubelet/pods/88b1aa3a-dc15-4ec1-ba76-8246e300422f/volumes" Feb 17 14:39:46 crc kubenswrapper[4836]: I0217 14:39:46.581530 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db342a3d-55f5-4b0c-b96f-327014b6fb82" path="/var/lib/kubelet/pods/db342a3d-55f5-4b0c-b96f-327014b6fb82/volumes" Feb 17 
14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.036703 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8fba-account-create-update-gqd5n"] Feb 17 14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.047711 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-28f5-account-create-update-74tvm"] Feb 17 14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.061359 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-5h5m9"] Feb 17 14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.076527 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a7c4-account-create-update-qj5lb"] Feb 17 14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.085186 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-28f5-account-create-update-74tvm"] Feb 17 14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.098096 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a7c4-account-create-update-qj5lb"] Feb 17 14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.109001 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-5h5m9"] Feb 17 14:39:47 crc kubenswrapper[4836]: I0217 14:39:47.119437 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8fba-account-create-update-gqd5n"] Feb 17 14:39:48 crc kubenswrapper[4836]: I0217 14:39:48.580853 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0312359b-98a6-49c7-83f1-fb44c679e8aa" path="/var/lib/kubelet/pods/0312359b-98a6-49c7-83f1-fb44c679e8aa/volumes" Feb 17 14:39:48 crc kubenswrapper[4836]: I0217 14:39:48.582007 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b8171da-ad25-4388-9dab-2afc19993d97" path="/var/lib/kubelet/pods/0b8171da-ad25-4388-9dab-2afc19993d97/volumes" Feb 17 14:39:48 crc kubenswrapper[4836]: I0217 14:39:48.582955 4836 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="4dc00367-2940-413d-872a-74d4fa37fc1f" path="/var/lib/kubelet/pods/4dc00367-2940-413d-872a-74d4fa37fc1f/volumes" Feb 17 14:39:48 crc kubenswrapper[4836]: I0217 14:39:48.583730 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d61f8c-4804-49b6-937e-fbaf20aa3ed2" path="/var/lib/kubelet/pods/c7d61f8c-4804-49b6-937e-fbaf20aa3ed2/volumes" Feb 17 14:40:25 crc kubenswrapper[4836]: I0217 14:40:25.909826 4836 scope.go:117] "RemoveContainer" containerID="0b1fdb782cc59c87b5c334a8e29bc01c7def7137ff5e1a24115754176ed4d2ab" Feb 17 14:40:25 crc kubenswrapper[4836]: I0217 14:40:25.940454 4836 scope.go:117] "RemoveContainer" containerID="e2428efba069899bf573bcb1f933d6f640083a8f0e4830cd36751b8b3332488d" Feb 17 14:40:26 crc kubenswrapper[4836]: I0217 14:40:26.006817 4836 scope.go:117] "RemoveContainer" containerID="66b9158b23020b3eaa0a3cea1af11df9fcdac6316e74751284cbec084e23c3a0" Feb 17 14:40:26 crc kubenswrapper[4836]: I0217 14:40:26.059797 4836 scope.go:117] "RemoveContainer" containerID="7dba2d07908548962f40435efa50aed2a21f68c9f55a50ad39cc396d718c6cf2" Feb 17 14:40:26 crc kubenswrapper[4836]: I0217 14:40:26.114284 4836 scope.go:117] "RemoveContainer" containerID="b40337010298624b5f124e89e37fbded22f8ac5a672bad50ecf9c49dfa1ed535" Feb 17 14:40:26 crc kubenswrapper[4836]: I0217 14:40:26.168145 4836 scope.go:117] "RemoveContainer" containerID="940b27e8f09ea23f3f385f55c83e9233f241038d9dc1c8761036c1c3dbf2e000" Feb 17 14:40:26 crc kubenswrapper[4836]: I0217 14:40:26.210973 4836 scope.go:117] "RemoveContainer" containerID="b099deccdc43aaaf5e1d9673615b93cdbff588beb42726f387dc2c0ef267fb73" Feb 17 14:40:26 crc kubenswrapper[4836]: I0217 14:40:26.241692 4836 scope.go:117] "RemoveContainer" containerID="a870dbadddedc2cd296e8c04a81b16817f6df39787b8061ee58f3dfc1fec3ca8" Feb 17 14:40:35 crc kubenswrapper[4836]: I0217 14:40:35.051365 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-896gw"] Feb 17 
14:40:35 crc kubenswrapper[4836]: I0217 14:40:35.067074 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-896gw"] Feb 17 14:40:35 crc kubenswrapper[4836]: I0217 14:40:35.822985 4836 generic.go:334] "Generic (PLEG): container finished" podID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerID="f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d" exitCode=0 Feb 17 14:40:35 crc kubenswrapper[4836]: I0217 14:40:35.823047 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snsbl/must-gather-4sqf7" event={"ID":"781729f0-fe27-45e7-bd7b-23709696ec4d","Type":"ContainerDied","Data":"f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d"} Feb 17 14:40:35 crc kubenswrapper[4836]: I0217 14:40:35.823950 4836 scope.go:117] "RemoveContainer" containerID="f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d" Feb 17 14:40:36 crc kubenswrapper[4836]: I0217 14:40:36.495618 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-snsbl_must-gather-4sqf7_781729f0-fe27-45e7-bd7b-23709696ec4d/gather/0.log" Feb 17 14:40:36 crc kubenswrapper[4836]: I0217 14:40:36.584380 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5284ac65-3629-4b0f-94ce-114964fe6d15" path="/var/lib/kubelet/pods/5284ac65-3629-4b0f-94ce-114964fe6d15/volumes" Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.243225 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-snsbl/must-gather-4sqf7"] Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.245452 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-snsbl/must-gather-4sqf7" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerName="copy" containerID="cri-o://07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67" gracePeriod=2 Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.258108 4836 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-snsbl/must-gather-4sqf7"] Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.808316 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-snsbl_must-gather-4sqf7_781729f0-fe27-45e7-bd7b-23709696ec4d/copy/0.log" Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.809379 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.942578 4836 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-snsbl_must-gather-4sqf7_781729f0-fe27-45e7-bd7b-23709696ec4d/copy/0.log" Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.943638 4836 generic.go:334] "Generic (PLEG): container finished" podID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerID="07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67" exitCode=143 Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.943711 4836 scope.go:117] "RemoveContainer" containerID="07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67" Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.943957 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snsbl/must-gather-4sqf7" Feb 17 14:40:45 crc kubenswrapper[4836]: I0217 14:40:45.991124 4836 scope.go:117] "RemoveContainer" containerID="f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.004434 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/781729f0-fe27-45e7-bd7b-23709696ec4d-must-gather-output\") pod \"781729f0-fe27-45e7-bd7b-23709696ec4d\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.004769 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6j6w\" (UniqueName: \"kubernetes.io/projected/781729f0-fe27-45e7-bd7b-23709696ec4d-kube-api-access-t6j6w\") pod \"781729f0-fe27-45e7-bd7b-23709696ec4d\" (UID: \"781729f0-fe27-45e7-bd7b-23709696ec4d\") " Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.015688 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781729f0-fe27-45e7-bd7b-23709696ec4d-kube-api-access-t6j6w" (OuterVolumeSpecName: "kube-api-access-t6j6w") pod "781729f0-fe27-45e7-bd7b-23709696ec4d" (UID: "781729f0-fe27-45e7-bd7b-23709696ec4d"). InnerVolumeSpecName "kube-api-access-t6j6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.108123 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6j6w\" (UniqueName: \"kubernetes.io/projected/781729f0-fe27-45e7-bd7b-23709696ec4d-kube-api-access-t6j6w\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.202278 4836 scope.go:117] "RemoveContainer" containerID="07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67" Feb 17 14:40:46 crc kubenswrapper[4836]: E0217 14:40:46.203760 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67\": container with ID starting with 07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67 not found: ID does not exist" containerID="07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.203812 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67"} err="failed to get container status \"07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67\": rpc error: code = NotFound desc = could not find container \"07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67\": container with ID starting with 07b49b7ed15d428bdf610ee0e143b812aa84072521d68f153a7b1e680fa48f67 not found: ID does not exist" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.203845 4836 scope.go:117] "RemoveContainer" containerID="f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d" Feb 17 14:40:46 crc kubenswrapper[4836]: E0217 14:40:46.210048 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d\": container with ID starting with f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d not found: ID does not exist" containerID="f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.211467 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d"} err="failed to get container status \"f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d\": rpc error: code = NotFound desc = could not find container \"f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d\": container with ID starting with f31755722b53fa2304e51f9abd859d08ebfbf46cba2b7c60b902e42814f8b35d not found: ID does not exist" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.335087 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/781729f0-fe27-45e7-bd7b-23709696ec4d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "781729f0-fe27-45e7-bd7b-23709696ec4d" (UID: "781729f0-fe27-45e7-bd7b-23709696ec4d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.415077 4836 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/781729f0-fe27-45e7-bd7b-23709696ec4d-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:46 crc kubenswrapper[4836]: I0217 14:40:46.589147 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" path="/var/lib/kubelet/pods/781729f0-fe27-45e7-bd7b-23709696ec4d/volumes" Feb 17 14:41:15 crc kubenswrapper[4836]: I0217 14:41:15.048348 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bz94v"] Feb 17 14:41:15 crc kubenswrapper[4836]: I0217 14:41:15.057972 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bz94v"] Feb 17 14:41:16 crc kubenswrapper[4836]: I0217 14:41:16.036148 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqvvn"] Feb 17 14:41:16 crc kubenswrapper[4836]: I0217 14:41:16.044950 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqvvn"] Feb 17 14:41:16 crc kubenswrapper[4836]: I0217 14:41:16.581672 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9d6a93-3d3a-4c5c-85cf-329209cfe911" path="/var/lib/kubelet/pods/3f9d6a93-3d3a-4c5c-85cf-329209cfe911/volumes" Feb 17 14:41:16 crc kubenswrapper[4836]: I0217 14:41:16.582428 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790a788c-3cfe-49c8-b1ff-a83bcedf17e0" path="/var/lib/kubelet/pods/790a788c-3cfe-49c8-b1ff-a83bcedf17e0/volumes" Feb 17 14:41:26 crc kubenswrapper[4836]: I0217 14:41:26.436819 4836 scope.go:117] "RemoveContainer" containerID="959d5cc1d8ba4d131ae83ee3b420db014e052fb98b3a6fa5c53753ae63d88003" Feb 17 14:41:26 crc kubenswrapper[4836]: I0217 14:41:26.488433 4836 
scope.go:117] "RemoveContainer" containerID="9a55578dc34e67ce0a93dbbd5c5e496ed951f38d462ffb4dcccf5ec23897e1c5" Feb 17 14:41:26 crc kubenswrapper[4836]: I0217 14:41:26.564305 4836 scope.go:117] "RemoveContainer" containerID="c224cbe49994301a8cf7d7e85623916f9815d0873ee461d723b64e1a3b753f8d" Feb 17 14:41:29 crc kubenswrapper[4836]: I0217 14:41:29.765324 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:41:29 crc kubenswrapper[4836]: I0217 14:41:29.765814 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:41:59 crc kubenswrapper[4836]: I0217 14:41:59.765240 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:41:59 crc kubenswrapper[4836]: I0217 14:41:59.766013 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:42:01 crc kubenswrapper[4836]: I0217 14:42:01.075610 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-h4mlr"] Feb 17 14:42:01 crc kubenswrapper[4836]: I0217 
14:42:01.086081 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-h4mlr"] Feb 17 14:42:02 crc kubenswrapper[4836]: I0217 14:42:02.587799 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079f20c9-f742-4c4b-a8c0-a2a09573bf62" path="/var/lib/kubelet/pods/079f20c9-f742-4c4b-a8c0-a2a09573bf62/volumes" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.352894 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7rqln"] Feb 17 14:42:04 crc kubenswrapper[4836]: E0217 14:42:04.354306 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerName="copy" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.354330 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerName="copy" Feb 17 14:42:04 crc kubenswrapper[4836]: E0217 14:42:04.354377 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerName="gather" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.354385 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerName="gather" Feb 17 14:42:04 crc kubenswrapper[4836]: E0217 14:42:04.354518 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf" containerName="container-00" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.354535 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf" containerName="container-00" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.354769 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerName="gather" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.354788 4836 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d11c6f6-7a22-40b4-b7ba-e0cb46e0cecf" containerName="container-00" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.354818 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="781729f0-fe27-45e7-bd7b-23709696ec4d" containerName="copy" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.360100 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.386747 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rqln"] Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.470371 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-catalog-content\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.470734 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-utilities\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.471001 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnbz\" (UniqueName: \"kubernetes.io/projected/5bcd4960-7859-4e31-829d-e737ae014f31-kube-api-access-qxnbz\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.573513 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qxnbz\" (UniqueName: \"kubernetes.io/projected/5bcd4960-7859-4e31-829d-e737ae014f31-kube-api-access-qxnbz\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.573636 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-catalog-content\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.573729 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-utilities\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.574420 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-utilities\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.574450 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-catalog-content\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.600886 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxnbz\" (UniqueName: 
\"kubernetes.io/projected/5bcd4960-7859-4e31-829d-e737ae014f31-kube-api-access-qxnbz\") pod \"redhat-operators-7rqln\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:04 crc kubenswrapper[4836]: I0217 14:42:04.701477 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:05 crc kubenswrapper[4836]: I0217 14:42:05.278378 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rqln"] Feb 17 14:42:06 crc kubenswrapper[4836]: I0217 14:42:06.216275 4836 generic.go:334] "Generic (PLEG): container finished" podID="5bcd4960-7859-4e31-829d-e737ae014f31" containerID="065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f" exitCode=0 Feb 17 14:42:06 crc kubenswrapper[4836]: I0217 14:42:06.216352 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rqln" event={"ID":"5bcd4960-7859-4e31-829d-e737ae014f31","Type":"ContainerDied","Data":"065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f"} Feb 17 14:42:06 crc kubenswrapper[4836]: I0217 14:42:06.216398 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rqln" event={"ID":"5bcd4960-7859-4e31-829d-e737ae014f31","Type":"ContainerStarted","Data":"3b2d2320c7cbb136eafb357b4ff7cfbbe5c583adde16eb7ed6a081a0f7bec0b0"} Feb 17 14:42:06 crc kubenswrapper[4836]: I0217 14:42:06.219565 4836 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:42:07 crc kubenswrapper[4836]: I0217 14:42:07.248278 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rqln" event={"ID":"5bcd4960-7859-4e31-829d-e737ae014f31","Type":"ContainerStarted","Data":"ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba"} Feb 17 14:42:09 crc 
kubenswrapper[4836]: I0217 14:42:09.537043 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s96zg"] Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.541134 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.553412 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s96zg"] Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.642852 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-769bf\" (UniqueName: \"kubernetes.io/projected/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-kube-api-access-769bf\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.644129 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-utilities\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.644322 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-catalog-content\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.746404 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-utilities\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.746521 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-catalog-content\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.746694 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-769bf\" (UniqueName: \"kubernetes.io/projected/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-kube-api-access-769bf\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.746963 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-utilities\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.747085 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-catalog-content\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.780437 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-769bf\" (UniqueName: 
\"kubernetes.io/projected/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-kube-api-access-769bf\") pod \"certified-operators-s96zg\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:09 crc kubenswrapper[4836]: I0217 14:42:09.863189 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:10 crc kubenswrapper[4836]: W0217 14:42:10.445205 4836 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782abdb8_014c_4d56_a7c7_a5ffb8a8e609.slice/crio-52d2f884fbcb1f46af948eaf3e822be773031f979f684862e6d76af9c0054264 WatchSource:0}: Error finding container 52d2f884fbcb1f46af948eaf3e822be773031f979f684862e6d76af9c0054264: Status 404 returned error can't find the container with id 52d2f884fbcb1f46af948eaf3e822be773031f979f684862e6d76af9c0054264 Feb 17 14:42:10 crc kubenswrapper[4836]: I0217 14:42:10.446176 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s96zg"] Feb 17 14:42:11 crc kubenswrapper[4836]: I0217 14:42:11.303691 4836 generic.go:334] "Generic (PLEG): container finished" podID="5bcd4960-7859-4e31-829d-e737ae014f31" containerID="ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba" exitCode=0 Feb 17 14:42:11 crc kubenswrapper[4836]: I0217 14:42:11.303794 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rqln" event={"ID":"5bcd4960-7859-4e31-829d-e737ae014f31","Type":"ContainerDied","Data":"ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba"} Feb 17 14:42:11 crc kubenswrapper[4836]: I0217 14:42:11.308476 4836 generic.go:334] "Generic (PLEG): container finished" podID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerID="383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76" exitCode=0 Feb 17 14:42:11 crc 
kubenswrapper[4836]: I0217 14:42:11.308546 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s96zg" event={"ID":"782abdb8-014c-4d56-a7c7-a5ffb8a8e609","Type":"ContainerDied","Data":"383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76"} Feb 17 14:42:11 crc kubenswrapper[4836]: I0217 14:42:11.308603 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s96zg" event={"ID":"782abdb8-014c-4d56-a7c7-a5ffb8a8e609","Type":"ContainerStarted","Data":"52d2f884fbcb1f46af948eaf3e822be773031f979f684862e6d76af9c0054264"} Feb 17 14:42:12 crc kubenswrapper[4836]: I0217 14:42:12.321179 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rqln" event={"ID":"5bcd4960-7859-4e31-829d-e737ae014f31","Type":"ContainerStarted","Data":"7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f"} Feb 17 14:42:12 crc kubenswrapper[4836]: I0217 14:42:12.325194 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s96zg" event={"ID":"782abdb8-014c-4d56-a7c7-a5ffb8a8e609","Type":"ContainerStarted","Data":"6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87"} Feb 17 14:42:12 crc kubenswrapper[4836]: I0217 14:42:12.349583 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7rqln" podStartSLOduration=2.8499781029999998 podStartE2EDuration="8.349536169s" podCreationTimestamp="2026-02-17 14:42:04 +0000 UTC" firstStartedPulling="2026-02-17 14:42:06.219172923 +0000 UTC m=+2152.562101192" lastFinishedPulling="2026-02-17 14:42:11.718730969 +0000 UTC m=+2158.061659258" observedRunningTime="2026-02-17 14:42:12.341547613 +0000 UTC m=+2158.684475902" watchObservedRunningTime="2026-02-17 14:42:12.349536169 +0000 UTC m=+2158.692464438" Feb 17 14:42:14 crc kubenswrapper[4836]: I0217 14:42:14.701684 4836 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:14 crc kubenswrapper[4836]: I0217 14:42:14.702132 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:15 crc kubenswrapper[4836]: I0217 14:42:15.355252 4836 generic.go:334] "Generic (PLEG): container finished" podID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerID="6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87" exitCode=0 Feb 17 14:42:15 crc kubenswrapper[4836]: I0217 14:42:15.355292 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s96zg" event={"ID":"782abdb8-014c-4d56-a7c7-a5ffb8a8e609","Type":"ContainerDied","Data":"6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87"} Feb 17 14:42:15 crc kubenswrapper[4836]: I0217 14:42:15.910792 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7rqln" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="registry-server" probeResult="failure" output=< Feb 17 14:42:15 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:42:15 crc kubenswrapper[4836]: > Feb 17 14:42:16 crc kubenswrapper[4836]: I0217 14:42:16.368773 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s96zg" event={"ID":"782abdb8-014c-4d56-a7c7-a5ffb8a8e609","Type":"ContainerStarted","Data":"4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52"} Feb 17 14:42:16 crc kubenswrapper[4836]: I0217 14:42:16.392899 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s96zg" podStartSLOduration=2.592563839 podStartE2EDuration="7.392868348s" podCreationTimestamp="2026-02-17 14:42:09 +0000 UTC" firstStartedPulling="2026-02-17 14:42:11.311179991 +0000 UTC m=+2157.654108260" 
lastFinishedPulling="2026-02-17 14:42:16.1114845 +0000 UTC m=+2162.454412769" observedRunningTime="2026-02-17 14:42:16.387548444 +0000 UTC m=+2162.730476723" watchObservedRunningTime="2026-02-17 14:42:16.392868348 +0000 UTC m=+2162.735796617" Feb 17 14:42:19 crc kubenswrapper[4836]: I0217 14:42:19.863912 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:19 crc kubenswrapper[4836]: I0217 14:42:19.864270 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:20 crc kubenswrapper[4836]: I0217 14:42:20.914574 4836 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s96zg" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="registry-server" probeResult="failure" output=< Feb 17 14:42:20 crc kubenswrapper[4836]: timeout: failed to connect service ":50051" within 1s Feb 17 14:42:20 crc kubenswrapper[4836]: > Feb 17 14:42:24 crc kubenswrapper[4836]: I0217 14:42:24.759625 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:24 crc kubenswrapper[4836]: I0217 14:42:24.821186 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:25 crc kubenswrapper[4836]: I0217 14:42:25.002539 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rqln"] Feb 17 14:42:26 crc kubenswrapper[4836]: I0217 14:42:26.469415 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7rqln" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="registry-server" containerID="cri-o://7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f" gracePeriod=2 Feb 17 14:42:26 crc kubenswrapper[4836]: 
I0217 14:42:26.783132 4836 scope.go:117] "RemoveContainer" containerID="6d24e9f78b938b24616765924395f09dc01b17f432bd2a5ca96dd30f763b95e2" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.154798 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.212800 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxnbz\" (UniqueName: \"kubernetes.io/projected/5bcd4960-7859-4e31-829d-e737ae014f31-kube-api-access-qxnbz\") pod \"5bcd4960-7859-4e31-829d-e737ae014f31\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.213175 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-catalog-content\") pod \"5bcd4960-7859-4e31-829d-e737ae014f31\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.213238 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-utilities\") pod \"5bcd4960-7859-4e31-829d-e737ae014f31\" (UID: \"5bcd4960-7859-4e31-829d-e737ae014f31\") " Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.214420 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-utilities" (OuterVolumeSpecName: "utilities") pod "5bcd4960-7859-4e31-829d-e737ae014f31" (UID: "5bcd4960-7859-4e31-829d-e737ae014f31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.221132 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bcd4960-7859-4e31-829d-e737ae014f31-kube-api-access-qxnbz" (OuterVolumeSpecName: "kube-api-access-qxnbz") pod "5bcd4960-7859-4e31-829d-e737ae014f31" (UID: "5bcd4960-7859-4e31-829d-e737ae014f31"). InnerVolumeSpecName "kube-api-access-qxnbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.317265 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.317348 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxnbz\" (UniqueName: \"kubernetes.io/projected/5bcd4960-7859-4e31-829d-e737ae014f31-kube-api-access-qxnbz\") on node \"crc\" DevicePath \"\"" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.370648 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bcd4960-7859-4e31-829d-e737ae014f31" (UID: "5bcd4960-7859-4e31-829d-e737ae014f31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.419947 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcd4960-7859-4e31-829d-e737ae014f31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.482142 4836 generic.go:334] "Generic (PLEG): container finished" podID="5bcd4960-7859-4e31-829d-e737ae014f31" containerID="7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f" exitCode=0 Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.482205 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rqln" event={"ID":"5bcd4960-7859-4e31-829d-e737ae014f31","Type":"ContainerDied","Data":"7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f"} Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.482243 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rqln" event={"ID":"5bcd4960-7859-4e31-829d-e737ae014f31","Type":"ContainerDied","Data":"3b2d2320c7cbb136eafb357b4ff7cfbbe5c583adde16eb7ed6a081a0f7bec0b0"} Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.482265 4836 scope.go:117] "RemoveContainer" containerID="7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.482457 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7rqln" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.527797 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rqln"] Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.529239 4836 scope.go:117] "RemoveContainer" containerID="ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.538783 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7rqln"] Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.568693 4836 scope.go:117] "RemoveContainer" containerID="065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.599590 4836 scope.go:117] "RemoveContainer" containerID="7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f" Feb 17 14:42:27 crc kubenswrapper[4836]: E0217 14:42:27.599861 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f\": container with ID starting with 7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f not found: ID does not exist" containerID="7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.599900 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f"} err="failed to get container status \"7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f\": rpc error: code = NotFound desc = could not find container \"7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f\": container with ID starting with 7717a65aa7f0f8e6a643858620b6dd5eb2c1e34f51d5b5756f90c00fc814cd9f not found: ID does 
not exist" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.599926 4836 scope.go:117] "RemoveContainer" containerID="ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba" Feb 17 14:42:27 crc kubenswrapper[4836]: E0217 14:42:27.600288 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba\": container with ID starting with ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba not found: ID does not exist" containerID="ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.600325 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba"} err="failed to get container status \"ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba\": rpc error: code = NotFound desc = could not find container \"ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba\": container with ID starting with ecec317ab07846296a1224db623c16f7c4e0c50365e2c51ebe73d09f24a315ba not found: ID does not exist" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.600338 4836 scope.go:117] "RemoveContainer" containerID="065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f" Feb 17 14:42:27 crc kubenswrapper[4836]: E0217 14:42:27.600805 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f\": container with ID starting with 065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f not found: ID does not exist" containerID="065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f" Feb 17 14:42:27 crc kubenswrapper[4836]: I0217 14:42:27.600827 4836 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f"} err="failed to get container status \"065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f\": rpc error: code = NotFound desc = could not find container \"065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f\": container with ID starting with 065f0b85b5c506e038b1ad3e7531809387c1f19b7e964615c77473205ec24a4f not found: ID does not exist" Feb 17 14:42:28 crc kubenswrapper[4836]: I0217 14:42:28.610733 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" path="/var/lib/kubelet/pods/5bcd4960-7859-4e31-829d-e737ae014f31/volumes" Feb 17 14:42:29 crc kubenswrapper[4836]: I0217 14:42:29.765246 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:42:29 crc kubenswrapper[4836]: I0217 14:42:29.765427 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:42:29 crc kubenswrapper[4836]: I0217 14:42:29.765509 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:42:29 crc kubenswrapper[4836]: I0217 14:42:29.766593 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f19bb9a4d6443b07f247471c35e97a577b83e39d81d033aff596fac57089969"} 
pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:42:29 crc kubenswrapper[4836]: I0217 14:42:29.766703 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://6f19bb9a4d6443b07f247471c35e97a577b83e39d81d033aff596fac57089969" gracePeriod=600 Feb 17 14:42:30 crc kubenswrapper[4836]: I0217 14:42:30.041507 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:30 crc kubenswrapper[4836]: I0217 14:42:30.159837 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:30 crc kubenswrapper[4836]: I0217 14:42:30.410896 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s96zg"] Feb 17 14:42:30 crc kubenswrapper[4836]: I0217 14:42:30.517031 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="6f19bb9a4d6443b07f247471c35e97a577b83e39d81d033aff596fac57089969" exitCode=0 Feb 17 14:42:30 crc kubenswrapper[4836]: I0217 14:42:30.517481 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"6f19bb9a4d6443b07f247471c35e97a577b83e39d81d033aff596fac57089969"} Feb 17 14:42:30 crc kubenswrapper[4836]: I0217 14:42:30.517676 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" 
event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerStarted","Data":"0cd1f125f5a9b4dbb6838481ae07ba424a29fd62723f85d44334ea5eb698c97d"} Feb 17 14:42:30 crc kubenswrapper[4836]: I0217 14:42:30.517730 4836 scope.go:117] "RemoveContainer" containerID="325cd676e21fc52f03c197c519aae517b900944cff0ba106872a3e674c1b1f20" Feb 17 14:42:31 crc kubenswrapper[4836]: I0217 14:42:31.529706 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s96zg" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="registry-server" containerID="cri-o://4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52" gracePeriod=2 Feb 17 14:42:31 crc kubenswrapper[4836]: E0217 14:42:31.795079 4836 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782abdb8_014c_4d56_a7c7_a5ffb8a8e609.slice/crio-conmon-4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782abdb8_014c_4d56_a7c7_a5ffb8a8e609.slice/crio-4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52.scope\": RecentStats: unable to find data in memory cache]" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.095568 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.144893 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-catalog-content\") pod \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.145632 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-utilities\") pod \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.145948 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-769bf\" (UniqueName: \"kubernetes.io/projected/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-kube-api-access-769bf\") pod \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\" (UID: \"782abdb8-014c-4d56-a7c7-a5ffb8a8e609\") " Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.146604 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-utilities" (OuterVolumeSpecName: "utilities") pod "782abdb8-014c-4d56-a7c7-a5ffb8a8e609" (UID: "782abdb8-014c-4d56-a7c7-a5ffb8a8e609"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.147453 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.162699 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-kube-api-access-769bf" (OuterVolumeSpecName: "kube-api-access-769bf") pod "782abdb8-014c-4d56-a7c7-a5ffb8a8e609" (UID: "782abdb8-014c-4d56-a7c7-a5ffb8a8e609"). InnerVolumeSpecName "kube-api-access-769bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.209043 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "782abdb8-014c-4d56-a7c7-a5ffb8a8e609" (UID: "782abdb8-014c-4d56-a7c7-a5ffb8a8e609"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.253289 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-769bf\" (UniqueName: \"kubernetes.io/projected/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-kube-api-access-769bf\") on node \"crc\" DevicePath \"\"" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.253367 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782abdb8-014c-4d56-a7c7-a5ffb8a8e609-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.541716 4836 generic.go:334] "Generic (PLEG): container finished" podID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerID="4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52" exitCode=0 Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.541784 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s96zg" event={"ID":"782abdb8-014c-4d56-a7c7-a5ffb8a8e609","Type":"ContainerDied","Data":"4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52"} Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.541850 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s96zg" event={"ID":"782abdb8-014c-4d56-a7c7-a5ffb8a8e609","Type":"ContainerDied","Data":"52d2f884fbcb1f46af948eaf3e822be773031f979f684862e6d76af9c0054264"} Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.541873 4836 scope.go:117] "RemoveContainer" containerID="4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.541926 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s96zg" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.566520 4836 scope.go:117] "RemoveContainer" containerID="6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.591192 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s96zg"] Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.593426 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s96zg"] Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.602256 4836 scope.go:117] "RemoveContainer" containerID="383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.650834 4836 scope.go:117] "RemoveContainer" containerID="4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52" Feb 17 14:42:32 crc kubenswrapper[4836]: E0217 14:42:32.651502 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52\": container with ID starting with 4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52 not found: ID does not exist" containerID="4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.651563 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52"} err="failed to get container status \"4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52\": rpc error: code = NotFound desc = could not find container \"4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52\": container with ID starting with 4a1b17ed59affd7deaa2032e53a2dda11bfb17725f3a69ef2d57bc516fd6ce52 not 
found: ID does not exist" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.651596 4836 scope.go:117] "RemoveContainer" containerID="6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87" Feb 17 14:42:32 crc kubenswrapper[4836]: E0217 14:42:32.656283 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87\": container with ID starting with 6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87 not found: ID does not exist" containerID="6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.656349 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87"} err="failed to get container status \"6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87\": rpc error: code = NotFound desc = could not find container \"6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87\": container with ID starting with 6bf27dcf1f51f29080696e75835d07c03e3aa2670a5cd4ab04af4eea98086d87 not found: ID does not exist" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.656384 4836 scope.go:117] "RemoveContainer" containerID="383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76" Feb 17 14:42:32 crc kubenswrapper[4836]: E0217 14:42:32.656951 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76\": container with ID starting with 383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76 not found: ID does not exist" containerID="383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76" Feb 17 14:42:32 crc kubenswrapper[4836]: I0217 14:42:32.656982 4836 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76"} err="failed to get container status \"383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76\": rpc error: code = NotFound desc = could not find container \"383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76\": container with ID starting with 383929ae316c24d0ac7b145939152ce6f201b6a654e5356dc808be6253f6bf76 not found: ID does not exist" Feb 17 14:42:34 crc kubenswrapper[4836]: I0217 14:42:34.584043 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" path="/var/lib/kubelet/pods/782abdb8-014c-4d56-a7c7-a5ffb8a8e609/volumes" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.720337 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l8n5s"] Feb 17 14:42:58 crc kubenswrapper[4836]: E0217 14:42:58.721278 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="extract-content" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.721319 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="extract-content" Feb 17 14:42:58 crc kubenswrapper[4836]: E0217 14:42:58.721339 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="extract-utilities" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.721346 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="extract-utilities" Feb 17 14:42:58 crc kubenswrapper[4836]: E0217 14:42:58.721359 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="registry-server" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 
14:42:58.721365 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="registry-server" Feb 17 14:42:58 crc kubenswrapper[4836]: E0217 14:42:58.721377 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="extract-utilities" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.721383 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="extract-utilities" Feb 17 14:42:58 crc kubenswrapper[4836]: E0217 14:42:58.721393 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="extract-content" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.721399 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="extract-content" Feb 17 14:42:58 crc kubenswrapper[4836]: E0217 14:42:58.721420 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="registry-server" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.721425 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="registry-server" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.721652 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bcd4960-7859-4e31-829d-e737ae014f31" containerName="registry-server" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.721665 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="782abdb8-014c-4d56-a7c7-a5ffb8a8e609" containerName="registry-server" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.723308 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.749512 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8n5s"] Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.901161 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2w8s\" (UniqueName: \"kubernetes.io/projected/1362b4e0-d576-4cc3-b60d-22dc164d36e6-kube-api-access-g2w8s\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.901420 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-utilities\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:58 crc kubenswrapper[4836]: I0217 14:42:58.902037 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-catalog-content\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.009635 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-utilities\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.009828 4836 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-catalog-content\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.009878 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2w8s\" (UniqueName: \"kubernetes.io/projected/1362b4e0-d576-4cc3-b60d-22dc164d36e6-kube-api-access-g2w8s\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.011404 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-utilities\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.011666 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-catalog-content\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.057890 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2w8s\" (UniqueName: \"kubernetes.io/projected/1362b4e0-d576-4cc3-b60d-22dc164d36e6-kube-api-access-g2w8s\") pod \"redhat-marketplace-l8n5s\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.199759 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.696215 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8n5s"] Feb 17 14:42:59 crc kubenswrapper[4836]: I0217 14:42:59.918733 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8n5s" event={"ID":"1362b4e0-d576-4cc3-b60d-22dc164d36e6","Type":"ContainerStarted","Data":"5e39044fdc945b0ccefd2f1703920c43f3a5e055cf64150d9a50d4dcc1955e22"} Feb 17 14:43:00 crc kubenswrapper[4836]: I0217 14:43:00.931506 4836 generic.go:334] "Generic (PLEG): container finished" podID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerID="c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475" exitCode=0 Feb 17 14:43:00 crc kubenswrapper[4836]: I0217 14:43:00.931612 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8n5s" event={"ID":"1362b4e0-d576-4cc3-b60d-22dc164d36e6","Type":"ContainerDied","Data":"c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475"} Feb 17 14:43:02 crc kubenswrapper[4836]: I0217 14:43:02.983523 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8n5s" event={"ID":"1362b4e0-d576-4cc3-b60d-22dc164d36e6","Type":"ContainerStarted","Data":"446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f"} Feb 17 14:43:03 crc kubenswrapper[4836]: I0217 14:43:03.997362 4836 generic.go:334] "Generic (PLEG): container finished" podID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerID="446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f" exitCode=0 Feb 17 14:43:03 crc kubenswrapper[4836]: I0217 14:43:03.997485 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8n5s" 
event={"ID":"1362b4e0-d576-4cc3-b60d-22dc164d36e6","Type":"ContainerDied","Data":"446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f"} Feb 17 14:43:05 crc kubenswrapper[4836]: I0217 14:43:05.012086 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8n5s" event={"ID":"1362b4e0-d576-4cc3-b60d-22dc164d36e6","Type":"ContainerStarted","Data":"c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f"} Feb 17 14:43:05 crc kubenswrapper[4836]: I0217 14:43:05.031515 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l8n5s" podStartSLOduration=3.5235011419999998 podStartE2EDuration="7.031466629s" podCreationTimestamp="2026-02-17 14:42:58 +0000 UTC" firstStartedPulling="2026-02-17 14:43:00.93427636 +0000 UTC m=+2207.277204629" lastFinishedPulling="2026-02-17 14:43:04.442241837 +0000 UTC m=+2210.785170116" observedRunningTime="2026-02-17 14:43:05.030155874 +0000 UTC m=+2211.373084163" watchObservedRunningTime="2026-02-17 14:43:05.031466629 +0000 UTC m=+2211.374394898" Feb 17 14:43:09 crc kubenswrapper[4836]: I0217 14:43:09.200845 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:43:09 crc kubenswrapper[4836]: I0217 14:43:09.201564 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:43:09 crc kubenswrapper[4836]: I0217 14:43:09.253827 4836 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:43:10 crc kubenswrapper[4836]: I0217 14:43:10.108402 4836 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:43:10 crc kubenswrapper[4836]: I0217 14:43:10.160770 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-l8n5s"] Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.078473 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l8n5s" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="registry-server" containerID="cri-o://c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f" gracePeriod=2 Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.762204 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.830564 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-utilities\") pod \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.830761 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-catalog-content\") pod \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.831153 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2w8s\" (UniqueName: \"kubernetes.io/projected/1362b4e0-d576-4cc3-b60d-22dc164d36e6-kube-api-access-g2w8s\") pod \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\" (UID: \"1362b4e0-d576-4cc3-b60d-22dc164d36e6\") " Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.832008 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-utilities" (OuterVolumeSpecName: "utilities") pod "1362b4e0-d576-4cc3-b60d-22dc164d36e6" (UID: 
"1362b4e0-d576-4cc3-b60d-22dc164d36e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.832288 4836 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.837686 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1362b4e0-d576-4cc3-b60d-22dc164d36e6-kube-api-access-g2w8s" (OuterVolumeSpecName: "kube-api-access-g2w8s") pod "1362b4e0-d576-4cc3-b60d-22dc164d36e6" (UID: "1362b4e0-d576-4cc3-b60d-22dc164d36e6"). InnerVolumeSpecName "kube-api-access-g2w8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.933460 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2w8s\" (UniqueName: \"kubernetes.io/projected/1362b4e0-d576-4cc3-b60d-22dc164d36e6-kube-api-access-g2w8s\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:12 crc kubenswrapper[4836]: I0217 14:43:12.983731 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1362b4e0-d576-4cc3-b60d-22dc164d36e6" (UID: "1362b4e0-d576-4cc3-b60d-22dc164d36e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.034978 4836 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362b4e0-d576-4cc3-b60d-22dc164d36e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.243969 4836 generic.go:334] "Generic (PLEG): container finished" podID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerID="c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f" exitCode=0 Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.244026 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8n5s" event={"ID":"1362b4e0-d576-4cc3-b60d-22dc164d36e6","Type":"ContainerDied","Data":"c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f"} Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.244061 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8n5s" event={"ID":"1362b4e0-d576-4cc3-b60d-22dc164d36e6","Type":"ContainerDied","Data":"5e39044fdc945b0ccefd2f1703920c43f3a5e055cf64150d9a50d4dcc1955e22"} Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.244079 4836 scope.go:117] "RemoveContainer" containerID="c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.245009 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8n5s" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.263399 4836 scope.go:117] "RemoveContainer" containerID="446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.283678 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8n5s"] Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.293157 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8n5s"] Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.305358 4836 scope.go:117] "RemoveContainer" containerID="c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.339125 4836 scope.go:117] "RemoveContainer" containerID="c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f" Feb 17 14:43:13 crc kubenswrapper[4836]: E0217 14:43:13.339928 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f\": container with ID starting with c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f not found: ID does not exist" containerID="c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.340006 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f"} err="failed to get container status \"c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f\": rpc error: code = NotFound desc = could not find container \"c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f\": container with ID starting with c3fbd162f9e80809d3ba31e6ab607c5f8d8bee1d68434fdee175237d6d20554f not found: 
ID does not exist" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.340067 4836 scope.go:117] "RemoveContainer" containerID="446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f" Feb 17 14:43:13 crc kubenswrapper[4836]: E0217 14:43:13.340770 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f\": container with ID starting with 446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f not found: ID does not exist" containerID="446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.340815 4836 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f"} err="failed to get container status \"446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f\": rpc error: code = NotFound desc = could not find container \"446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f\": container with ID starting with 446d8b49f1c41afa000152dee4447afdd73a376b18b18225416d88792c05eb9f not found: ID does not exist" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.340843 4836 scope.go:117] "RemoveContainer" containerID="c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475" Feb 17 14:43:13 crc kubenswrapper[4836]: E0217 14:43:13.343720 4836 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475\": container with ID starting with c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475 not found: ID does not exist" containerID="c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475" Feb 17 14:43:13 crc kubenswrapper[4836]: I0217 14:43:13.343782 4836 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475"} err="failed to get container status \"c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475\": rpc error: code = NotFound desc = could not find container \"c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475\": container with ID starting with c6cfe2a894550e4dfa97d33e7a8c45bba54a5b3f31c99c78437667f405675475 not found: ID does not exist" Feb 17 14:43:14 crc kubenswrapper[4836]: I0217 14:43:14.585468 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" path="/var/lib/kubelet/pods/1362b4e0-d576-4cc3-b60d-22dc164d36e6/volumes" Feb 17 14:44:59 crc kubenswrapper[4836]: I0217 14:44:59.765314 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:44:59 crc kubenswrapper[4836]: I0217 14:44:59.766091 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.360011 4836 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm"] Feb 17 14:45:00 crc kubenswrapper[4836]: E0217 14:45:00.360774 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="extract-utilities" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.360789 4836 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="extract-utilities" Feb 17 14:45:00 crc kubenswrapper[4836]: E0217 14:45:00.360804 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="extract-content" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.360810 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="extract-content" Feb 17 14:45:00 crc kubenswrapper[4836]: E0217 14:45:00.360821 4836 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="registry-server" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.360827 4836 state_mem.go:107] "Deleted CPUSet assignment" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="registry-server" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.361036 4836 memory_manager.go:354] "RemoveStaleState removing state" podUID="1362b4e0-d576-4cc3-b60d-22dc164d36e6" containerName="registry-server" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.362174 4836 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.367072 4836 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.367582 4836 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.381323 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-config-volume\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.381449 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grkhq\" (UniqueName: \"kubernetes.io/projected/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-kube-api-access-grkhq\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.381564 4836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-secret-volume\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.384366 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm"] Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.484100 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-config-volume\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.484245 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grkhq\" (UniqueName: \"kubernetes.io/projected/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-kube-api-access-grkhq\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.484404 4836 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-secret-volume\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.485537 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-config-volume\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.496046 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-secret-volume\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.511991 4836 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grkhq\" (UniqueName: \"kubernetes.io/projected/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-kube-api-access-grkhq\") pod \"collect-profiles-29522325-6z9vm\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:00 crc kubenswrapper[4836]: I0217 14:45:00.711931 4836 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:01 crc kubenswrapper[4836]: I0217 14:45:01.434931 4836 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm"] Feb 17 14:45:01 crc kubenswrapper[4836]: I0217 14:45:01.802089 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" event={"ID":"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765","Type":"ContainerStarted","Data":"7ca139c0f79ad8f1696fe7f9e1c84b53c156ee1274043c6c3d37f1283049d2db"} Feb 17 14:45:01 crc kubenswrapper[4836]: I0217 14:45:01.802152 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" event={"ID":"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765","Type":"ContainerStarted","Data":"c3be4cefaa64aaba8ee9b719c8fa1372623b7c9d9319f4164769a8ca22750bd7"} Feb 17 14:45:01 crc kubenswrapper[4836]: I0217 14:45:01.835100 4836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" 
podStartSLOduration=1.835061963 podStartE2EDuration="1.835061963s" podCreationTimestamp="2026-02-17 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:45:01.821637452 +0000 UTC m=+2328.164565731" watchObservedRunningTime="2026-02-17 14:45:01.835061963 +0000 UTC m=+2328.177990242" Feb 17 14:45:02 crc kubenswrapper[4836]: I0217 14:45:02.813732 4836 generic.go:334] "Generic (PLEG): container finished" podID="9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765" containerID="7ca139c0f79ad8f1696fe7f9e1c84b53c156ee1274043c6c3d37f1283049d2db" exitCode=0 Feb 17 14:45:02 crc kubenswrapper[4836]: I0217 14:45:02.813794 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" event={"ID":"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765","Type":"ContainerDied","Data":"7ca139c0f79ad8f1696fe7f9e1c84b53c156ee1274043c6c3d37f1283049d2db"} Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.341666 4836 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.483494 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-config-volume\") pod \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.483616 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-secret-volume\") pod \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.483669 4836 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grkhq\" (UniqueName: \"kubernetes.io/projected/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-kube-api-access-grkhq\") pod \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\" (UID: \"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765\") " Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.485883 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765" (UID: "9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.500327 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-kube-api-access-grkhq" (OuterVolumeSpecName: "kube-api-access-grkhq") pod "9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765" (UID: "9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765"). 
InnerVolumeSpecName "kube-api-access-grkhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.503523 4836 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765" (UID: "9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.542146 4836 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"] Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.561486 4836 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-c8fps"] Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.586752 4836 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.586809 4836 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.586821 4836 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grkhq\" (UniqueName: \"kubernetes.io/projected/9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765-kube-api-access-grkhq\") on node \"crc\" DevicePath \"\"" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.592538 4836 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91eb437c-beea-4f2d-b3f7-505b87fe6dee" path="/var/lib/kubelet/pods/91eb437c-beea-4f2d-b3f7-505b87fe6dee/volumes" Feb 17 14:45:04 
crc kubenswrapper[4836]: I0217 14:45:04.838289 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" event={"ID":"9e1a5bfb-4c7e-47f1-9acb-1c7da14e7765","Type":"ContainerDied","Data":"c3be4cefaa64aaba8ee9b719c8fa1372623b7c9d9319f4164769a8ca22750bd7"} Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.838427 4836 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-6z9vm" Feb 17 14:45:04 crc kubenswrapper[4836]: I0217 14:45:04.838354 4836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3be4cefaa64aaba8ee9b719c8fa1372623b7c9d9319f4164769a8ca22750bd7" Feb 17 14:45:26 crc kubenswrapper[4836]: I0217 14:45:26.990954 4836 scope.go:117] "RemoveContainer" containerID="4b8580f44aade0425b4de34e0f49d07bd6192e526f9c10aa11b53556a3546660" Feb 17 14:45:29 crc kubenswrapper[4836]: I0217 14:45:29.765749 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:45:29 crc kubenswrapper[4836]: I0217 14:45:29.766532 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:45:59 crc kubenswrapper[4836]: I0217 14:45:59.765530 4836 patch_prober.go:28] interesting pod/machine-config-daemon-bkk9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 17 14:45:59 crc kubenswrapper[4836]: I0217 14:45:59.766183 4836 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:45:59 crc kubenswrapper[4836]: I0217 14:45:59.766257 4836 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" Feb 17 14:45:59 crc kubenswrapper[4836]: I0217 14:45:59.767426 4836 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cd1f125f5a9b4dbb6838481ae07ba424a29fd62723f85d44334ea5eb698c97d"} pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:45:59 crc kubenswrapper[4836]: I0217 14:45:59.767505 4836 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" containerName="machine-config-daemon" containerID="cri-o://0cd1f125f5a9b4dbb6838481ae07ba424a29fd62723f85d44334ea5eb698c97d" gracePeriod=600 Feb 17 14:45:59 crc kubenswrapper[4836]: E0217 14:45:59.900133 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c" Feb 17 14:46:00 crc 
kubenswrapper[4836]: I0217 14:46:00.474808 4836 generic.go:334] "Generic (PLEG): container finished" podID="895a19c9-a3f0-4a15-aa19-19347121388c" containerID="0cd1f125f5a9b4dbb6838481ae07ba424a29fd62723f85d44334ea5eb698c97d" exitCode=0 Feb 17 14:46:00 crc kubenswrapper[4836]: I0217 14:46:00.474870 4836 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" event={"ID":"895a19c9-a3f0-4a15-aa19-19347121388c","Type":"ContainerDied","Data":"0cd1f125f5a9b4dbb6838481ae07ba424a29fd62723f85d44334ea5eb698c97d"} Feb 17 14:46:00 crc kubenswrapper[4836]: I0217 14:46:00.474913 4836 scope.go:117] "RemoveContainer" containerID="6f19bb9a4d6443b07f247471c35e97a577b83e39d81d033aff596fac57089969" Feb 17 14:46:00 crc kubenswrapper[4836]: I0217 14:46:00.475793 4836 scope.go:117] "RemoveContainer" containerID="0cd1f125f5a9b4dbb6838481ae07ba424a29fd62723f85d44334ea5eb698c97d" Feb 17 14:46:00 crc kubenswrapper[4836]: E0217 14:46:00.476264 4836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bkk9g_openshift-machine-config-operator(895a19c9-a3f0-4a15-aa19-19347121388c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bkk9g" podUID="895a19c9-a3f0-4a15-aa19-19347121388c"